00:00:00.001 Started by upstream project "autotest-per-patch" build number 130555 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.048 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.049 The recommended git tool is: git 00:00:00.049 using credential 00000000-0000-0000-0000-000000000002 00:00:00.051 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.084 Fetching changes from the remote Git repository 00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.131 Using shallow fetch with depth 1 00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.131 > git --version # timeout=10 00:00:00.167 > git --version # 'git version 2.39.2' 00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.190 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.190 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.327 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.339 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.351 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:04.351 > git config core.sparsecheckout # timeout=10 00:00:04.362 > git read-tree -mu HEAD # timeout=10 00:00:04.380 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:04.400 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:04.401 > git rev-list --no-walk 
53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:04.500 [Pipeline] Start of Pipeline 00:00:04.515 [Pipeline] library 00:00:04.517 Loading library shm_lib@master 00:00:04.517 Library shm_lib@master is cached. Copying from home. 00:00:04.535 [Pipeline] node 00:00:04.544 Running on CYP10 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.545 [Pipeline] { 00:00:04.557 [Pipeline] catchError 00:00:04.558 [Pipeline] { 00:00:04.571 [Pipeline] wrap 00:00:04.579 [Pipeline] { 00:00:04.587 [Pipeline] stage 00:00:04.589 [Pipeline] { (Prologue) 00:00:04.794 [Pipeline] sh 00:00:05.085 + logger -p user.info -t JENKINS-CI 00:00:05.100 [Pipeline] echo 00:00:05.101 Node: CYP10 00:00:05.109 [Pipeline] sh 00:00:05.410 [Pipeline] setCustomBuildProperty 00:00:05.418 [Pipeline] echo 00:00:05.419 Cleanup processes 00:00:05.424 [Pipeline] sh 00:00:05.711 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.711 3635585 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.728 [Pipeline] sh 00:00:06.015 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.015 ++ grep -v 'sudo pgrep' 00:00:06.015 ++ awk '{print $1}' 00:00:06.015 + sudo kill -9 00:00:06.015 + true 00:00:06.027 [Pipeline] cleanWs 00:00:06.036 [WS-CLEANUP] Deleting project workspace... 00:00:06.036 [WS-CLEANUP] Deferred wipeout is used... 
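The "Cleanup processes" step above uses a common shell idiom: list candidate PIDs with `pgrep -af`, filter out the `pgrep` invocation itself, keep only the PID column, then `kill -9` whatever remains, tolerating the empty case with `+ true`. A minimal standalone sketch of that pipeline (the function name `extract_pids` and the sample input are illustrative, not from the log):

```shell
#!/usr/bin/env bash
# Filter 'sudo pgrep -af <pattern>' output: drop the pgrep line itself
# and keep only the leading PID column, mirroring the log's
# "grep -v 'sudo pgrep' | awk '{print $1}'" pipeline.
extract_pids() {
  grep -v 'sudo pgrep' | awk '{print $1}'
}

# Usage, as in the log (kill -9 with no PIDs fails, hence the trailing
# '|| true' so 'set -e' scripts keep going):
#   sudo pgrep -af "$WORKSPACE/spdk" | extract_pids | xargs -r sudo kill -9 || true
```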
00:00:06.044 [WS-CLEANUP] done 00:00:06.049 [Pipeline] setCustomBuildProperty 00:00:06.063 [Pipeline] sh 00:00:06.346 + sudo git config --global --replace-all safe.directory '*' 00:00:06.455 [Pipeline] httpRequest 00:00:06.794 [Pipeline] echo 00:00:06.796 Sorcerer 10.211.164.101 is alive 00:00:06.806 [Pipeline] retry 00:00:06.808 [Pipeline] { 00:00:06.822 [Pipeline] httpRequest 00:00:06.826 HttpMethod: GET 00:00:06.827 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:06.827 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:06.830 Response Code: HTTP/1.1 200 OK 00:00:06.830 Success: Status code 200 is in the accepted range: 200,404 00:00:06.831 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:06.977 [Pipeline] } 00:00:06.995 [Pipeline] // retry 00:00:07.002 [Pipeline] sh 00:00:07.287 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:07.304 [Pipeline] httpRequest 00:00:07.930 [Pipeline] echo 00:00:07.932 Sorcerer 10.211.164.101 is alive 00:00:07.943 [Pipeline] retry 00:00:07.945 [Pipeline] { 00:00:07.962 [Pipeline] httpRequest 00:00:07.967 HttpMethod: GET 00:00:07.967 URL: http://10.211.164.101/packages/spdk_fefe29c8ce882720f8bf13069a4ccb424fc49514.tar.gz 00:00:07.967 Sending request to url: http://10.211.164.101/packages/spdk_fefe29c8ce882720f8bf13069a4ccb424fc49514.tar.gz 00:00:07.974 Response Code: HTTP/1.1 200 OK 00:00:07.975 Success: Status code 200 is in the accepted range: 200,404 00:00:07.975 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_fefe29c8ce882720f8bf13069a4ccb424fc49514.tar.gz 00:00:35.025 [Pipeline] } 00:00:35.043 [Pipeline] // retry 00:00:35.051 [Pipeline] sh 00:00:35.341 + tar --no-same-owner -xf spdk_fefe29c8ce882720f8bf13069a4ccb424fc49514.tar.gz 00:00:37.897 [Pipeline] sh 00:00:38.184 + git -C spdk log 
--oneline -n5 00:00:38.184 fefe29c8c bdev/nvme: ctrl config consistency check 00:00:38.184 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:38.184 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:38.184 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:38.184 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:38.195 [Pipeline] } 00:00:38.210 [Pipeline] // stage 00:00:38.219 [Pipeline] stage 00:00:38.221 [Pipeline] { (Prepare) 00:00:38.235 [Pipeline] writeFile 00:00:38.252 [Pipeline] sh 00:00:38.543 + logger -p user.info -t JENKINS-CI 00:00:38.556 [Pipeline] sh 00:00:38.842 + logger -p user.info -t JENKINS-CI 00:00:38.856 [Pipeline] sh 00:00:39.145 + cat autorun-spdk.conf 00:00:39.146 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.146 SPDK_TEST_NVMF=1 00:00:39.146 SPDK_TEST_NVME_CLI=1 00:00:39.146 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.146 SPDK_TEST_NVMF_NICS=e810 00:00:39.146 SPDK_TEST_VFIOUSER=1 00:00:39.146 SPDK_RUN_UBSAN=1 00:00:39.146 NET_TYPE=phy 00:00:39.153 RUN_NIGHTLY=0 00:00:39.158 [Pipeline] readFile 00:00:39.181 [Pipeline] withEnv 00:00:39.183 [Pipeline] { 00:00:39.196 [Pipeline] sh 00:00:39.486 + set -ex 00:00:39.486 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:39.486 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:39.486 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.486 ++ SPDK_TEST_NVMF=1 00:00:39.486 ++ SPDK_TEST_NVME_CLI=1 00:00:39.486 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.486 ++ SPDK_TEST_NVMF_NICS=e810 00:00:39.486 ++ SPDK_TEST_VFIOUSER=1 00:00:39.486 ++ SPDK_RUN_UBSAN=1 00:00:39.486 ++ NET_TYPE=phy 00:00:39.486 ++ RUN_NIGHTLY=0 00:00:39.486 + case $SPDK_TEST_NVMF_NICS in 00:00:39.486 + DRIVERS=ice 00:00:39.486 + [[ tcp == \r\d\m\a ]] 00:00:39.486 + [[ -n ice ]] 00:00:39.486 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:39.486 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:39.486 rmmod: ERROR: Module 
mlx5_ib is not currently loaded 00:00:39.486 rmmod: ERROR: Module irdma is not currently loaded 00:00:39.486 rmmod: ERROR: Module i40iw is not currently loaded 00:00:39.486 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:39.486 + true 00:00:39.486 + for D in $DRIVERS 00:00:39.486 + sudo modprobe ice 00:00:39.486 + exit 0 00:00:39.495 [Pipeline] } 00:00:39.512 [Pipeline] // withEnv 00:00:39.517 [Pipeline] } 00:00:39.531 [Pipeline] // stage 00:00:39.541 [Pipeline] catchError 00:00:39.543 [Pipeline] { 00:00:39.557 [Pipeline] timeout 00:00:39.557 Timeout set to expire in 1 hr 0 min 00:00:39.559 [Pipeline] { 00:00:39.572 [Pipeline] stage 00:00:39.574 [Pipeline] { (Tests) 00:00:39.587 [Pipeline] sh 00:00:39.875 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.875 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.875 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.875 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:39.875 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:39.875 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:39.875 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:39.875 + [[ ! 
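The driver-selection step above maps `SPDK_TEST_NVMF_NICS` to a kernel module, unloads potentially conflicting RDMA/iWARP modules (tolerating "not currently loaded" errors), then loads the chosen one. A sketch of that mapping; only the `e810 -> ice` branch is confirmed by this log, everything else here is an assumption:

```shell
#!/usr/bin/env bash
# Map the NIC type from autorun-spdk.conf to the driver to modprobe.
# Only e810 -> ice appears in the log's case statement; other NIC
# types would need their own branches.
pick_driver() {
  case "$1" in
    e810) echo ice ;;
    *)    return 1 ;;   # unknown NIC type: let the caller decide
  esac
}

# Usage, as in the log (rmmod of an absent module errors, hence '|| true'):
#   sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
#   sudo modprobe "$(pick_driver "$SPDK_TEST_NVMF_NICS")"
```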
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:39.875 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:39.875 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:39.875 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:39.875 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.875 + source /etc/os-release 00:00:39.875 ++ NAME='Fedora Linux' 00:00:39.875 ++ VERSION='39 (Cloud Edition)' 00:00:39.875 ++ ID=fedora 00:00:39.875 ++ VERSION_ID=39 00:00:39.875 ++ VERSION_CODENAME= 00:00:39.875 ++ PLATFORM_ID=platform:f39 00:00:39.875 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:39.875 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:39.875 ++ LOGO=fedora-logo-icon 00:00:39.875 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:39.875 ++ HOME_URL=https://fedoraproject.org/ 00:00:39.875 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:39.875 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:39.875 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:39.875 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:39.875 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:39.875 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:39.875 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:39.875 ++ SUPPORT_END=2024-11-12 00:00:39.875 ++ VARIANT='Cloud Edition' 00:00:39.875 ++ VARIANT_ID=cloud 00:00:39.875 + uname -a 00:00:39.875 Linux spdk-cyp-10 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:39.875 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:42.421 Hugepages 00:00:42.421 node hugesize free / total 00:00:42.421 node0 1048576kB 0 / 0 00:00:42.421 node0 2048kB 0 / 0 00:00:42.421 node1 1048576kB 0 / 0 00:00:42.421 node1 2048kB 0 / 0 00:00:42.421 00:00:42.421 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:42.421 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:42.421 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:00:42.421 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:42.421 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:42.421 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:42.421 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:42.421 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:42.421 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:42.421 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:42.421 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:42.421 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:42.421 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:42.421 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:42.421 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:42.421 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:42.421 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:42.421 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:42.421 + rm -f /tmp/spdk-ld-path 00:00:42.421 + source autorun-spdk.conf 00:00:42.421 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.421 ++ SPDK_TEST_NVMF=1 00:00:42.421 ++ SPDK_TEST_NVME_CLI=1 00:00:42.421 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.421 ++ SPDK_TEST_NVMF_NICS=e810 00:00:42.421 ++ SPDK_TEST_VFIOUSER=1 00:00:42.421 ++ SPDK_RUN_UBSAN=1 00:00:42.421 ++ NET_TYPE=phy 00:00:42.421 ++ RUN_NIGHTLY=0 00:00:42.421 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:42.421 + [[ -n '' ]] 00:00:42.421 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.421 + for M in /var/spdk/build-*-manifest.txt 00:00:42.421 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:42.421 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.421 + for M in /var/spdk/build-*-manifest.txt 00:00:42.421 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:42.421 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.421 + for M in /var/spdk/build-*-manifest.txt 00:00:42.421 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:00:42.421 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.421 ++ uname 00:00:42.421 + [[ Linux == \L\i\n\u\x ]] 00:00:42.421 + sudo dmesg -T 00:00:42.421 + sudo dmesg --clear 00:00:42.682 + dmesg_pid=3636558 00:00:42.682 + [[ Fedora Linux == FreeBSD ]] 00:00:42.682 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.682 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.682 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:42.682 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:42.682 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:42.682 + [[ -x /usr/src/fio-static/fio ]] 00:00:42.682 + export FIO_BIN=/usr/src/fio-static/fio 00:00:42.682 + FIO_BIN=/usr/src/fio-static/fio 00:00:42.682 + sudo dmesg -Tw 00:00:42.682 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:42.682 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:42.682 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:42.682 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.682 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.682 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:42.682 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.682 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.682 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.682 Test configuration: 00:00:42.682 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.682 SPDK_TEST_NVMF=1 00:00:42.682 SPDK_TEST_NVME_CLI=1 00:00:42.682 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.682 SPDK_TEST_NVMF_NICS=e810 00:00:42.682 SPDK_TEST_VFIOUSER=1 00:00:42.682 SPDK_RUN_UBSAN=1 00:00:42.682 NET_TYPE=phy 00:00:42.682 RUN_NIGHTLY=0 14:57:52 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:00:42.682 14:57:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:42.682 14:57:52 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:42.682 14:57:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:42.682 14:57:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:42.682 14:57:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:42.682 14:57:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.682 14:57:52 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.682 14:57:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.682 14:57:52 -- paths/export.sh@5 -- $ export PATH 00:00:42.682 14:57:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.682 14:57:52 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:42.682 14:57:52 -- common/autobuild_common.sh@479 -- $ date +%s 00:00:42.682 14:57:52 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727787472.XXXXXX 00:00:42.682 14:57:52 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727787472.6jt3ub 00:00:42.682 14:57:52 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:00:42.682 14:57:52 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:00:42.682 14:57:52 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 
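The `paths/export.sh` lines above prepend one tool directory per step, which is why the final exported PATH contains duplicate entries (`/opt/golangci/1.54.2/bin`, `/usr/local/bin`, etc. appear more than once). A small sketch of a prepend-if-absent variant that avoids that growth; `prepend` and the variable `P` are illustrative, not part of the script in the log:

```shell
#!/usr/bin/env bash
# Prepend a directory to a PATH-style variable only if it is not
# already present, avoiding the duplicates visible in the log's PATH.
# Wrapping in ':' on both sides makes the substring match exact.
prepend() {
  case ":$P:" in
    *":$1:"*) ;;          # already present: leave P unchanged
    *) P="$1:$P" ;;
  esac
}

P="/usr/bin:/bin"
prepend /opt/go/1.21.1/bin
prepend /usr/bin            # duplicate: ignored
echo "$P"                   # /opt/go/1.21.1/bin:/usr/bin:/bin
```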
00:00:42.682 14:57:52 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:42.683 14:57:52 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:42.683 14:57:52 -- common/autobuild_common.sh@495 -- $ get_config_params 00:00:42.683 14:57:52 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:42.683 14:57:52 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.683 14:57:52 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:42.683 14:57:52 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:00:42.683 14:57:52 -- pm/common@17 -- $ local monitor 00:00:42.683 14:57:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.683 14:57:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.683 14:57:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.683 14:57:52 -- pm/common@21 -- $ date +%s 00:00:42.683 14:57:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.683 14:57:52 -- pm/common@21 -- $ date +%s 00:00:42.683 14:57:52 -- pm/common@25 -- $ sleep 1 00:00:42.683 14:57:52 -- pm/common@21 -- $ date +%s 00:00:42.683 14:57:52 -- pm/common@21 -- $ date +%s 00:00:42.683 14:57:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727787472 00:00:42.683 14:57:52 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727787472 00:00:42.683 14:57:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727787472 00:00:42.683 14:57:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727787472 00:00:42.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727787472_collect-vmstat.pm.log 00:00:42.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727787472_collect-cpu-load.pm.log 00:00:42.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727787472_collect-cpu-temp.pm.log 00:00:42.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727787472_collect-bmc-pm.bmc.pm.log 00:00:43.624 14:57:53 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:00:43.624 14:57:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:43.624 14:57:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:43.624 14:57:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.624 14:57:53 -- spdk/autobuild.sh@16 -- $ date -u 00:00:43.624 Tue Oct 1 12:57:53 PM UTC 2024 00:00:43.624 14:57:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:43.624 v25.01-pre-18-gfefe29c8c 00:00:43.624 14:57:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:43.624 14:57:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.624 14:57:53 -- spdk/autobuild.sh@24 -- $ run_test 
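The resource monitors launched above all embed one shared epoch timestamp (`1727787472`) in their log names, so every `*.pm.log` from this build can be grouped later. A minimal sketch of that naming scheme; the collector names are taken from the log, the fixed timestamp is reused here only to keep the demo reproducible:

```shell
#!/usr/bin/env bash
# Build the per-collector log names the monitors redirect to, e.g.
# monitor.autobuild.sh.1727787472_collect-cpu-load.pm.log.
ts=1727787472      # in the real script this is "$(date +%s)"
names=""
for c in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
  names="$names monitor.autobuild.sh.${ts}_${c}.pm.log"
done
echo "$names" | wc -w    # 4 log names, one per collector
```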
ubsan echo 'using ubsan' 00:00:43.624 14:57:53 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:43.624 14:57:53 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:43.624 14:57:53 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.885 ************************************ 00:00:43.885 START TEST ubsan 00:00:43.885 ************************************ 00:00:43.885 14:57:53 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:43.885 using ubsan 00:00:43.885 00:00:43.885 real 0m0.000s 00:00:43.885 user 0m0.000s 00:00:43.885 sys 0m0.000s 00:00:43.885 14:57:53 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:43.885 14:57:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:43.885 ************************************ 00:00:43.885 END TEST ubsan 00:00:43.885 ************************************ 00:00:43.885 14:57:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:43.885 14:57:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.885 14:57:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.885 14:57:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:43.885 14:57:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:43.885 14:57:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:43.885 14:57:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:43.885 14:57:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:43.885 14:57:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:43.885 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:43.885 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:44.456 Using 'verbs' RDMA provider 00:01:00.303 Configuring ISA-L (logfile: 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:12.529 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:12.529 Creating mk/config.mk...done. 00:01:12.529 Creating mk/cc.flags.mk...done. 00:01:12.529 Type 'make' to build. 00:01:12.529 14:58:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:12.529 14:58:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:12.529 14:58:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:12.529 14:58:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.529 ************************************ 00:01:12.529 START TEST make 00:01:12.529 ************************************ 00:01:12.530 14:58:21 make -- common/autotest_common.sh@1125 -- $ make -j144 00:01:12.530 make[1]: Nothing to be done for 'all'. 00:01:13.912 The Meson build system 00:01:13.912 Version: 1.5.0 00:01:13.912 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:13.912 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:13.912 Build type: native build 00:01:13.912 Project name: libvfio-user 00:01:13.912 Project version: 0.0.1 00:01:13.912 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:13.912 C linker for the host machine: cc ld.bfd 2.40-14 00:01:13.912 Host machine cpu family: x86_64 00:01:13.912 Host machine cpu: x86_64 00:01:13.912 Run-time dependency threads found: YES 00:01:13.912 Library dl found: YES 00:01:13.912 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:13.912 Run-time dependency json-c found: YES 0.17 00:01:13.912 Run-time dependency cmocka found: YES 1.1.7 00:01:13.912 Program pytest-3 found: NO 00:01:13.912 Program flake8 found: NO 00:01:13.912 Program misspell-fixer found: NO 00:01:13.912 Program restructuredtext-lint found: NO 00:01:13.912 Program valgrind found: YES 
(/usr/bin/valgrind) 00:01:13.912 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:13.912 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:13.912 Compiler for C supports arguments -Wwrite-strings: YES 00:01:13.912 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:13.912 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:13.912 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:13.912 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:13.912 Build targets in project: 8 00:01:13.912 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:13.912 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:13.912 00:01:13.912 libvfio-user 0.0.1 00:01:13.912 00:01:13.912 User defined options 00:01:13.912 buildtype : debug 00:01:13.912 default_library: shared 00:01:13.912 libdir : /usr/local/lib 00:01:13.912 00:01:13.912 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:14.171 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:14.171 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:14.482 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:14.482 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:14.482 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:14.482 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:14.482 [6/37] Compiling C object samples/null.p/null.c.o 00:01:14.482 [7/37] Compiling C object 
samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:14.482 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:14.482 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:14.482 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:14.482 [11/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:14.482 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:14.482 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:14.482 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:14.482 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:14.482 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:14.482 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:14.482 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:14.482 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:14.482 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:14.482 [21/37] Compiling C object samples/server.p/server.c.o 00:01:14.482 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:14.482 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:14.482 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:14.482 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:14.482 [26/37] Compiling C object samples/client.p/client.c.o 00:01:14.482 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:14.482 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:14.482 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:14.482 [30/37] Linking target samples/client 00:01:14.482 [31/37] Linking target test/unit_tests 00:01:14.752 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:14.752 [33/37] Linking target samples/server 00:01:14.752 
[34/37] Linking target samples/lspci 00:01:14.752 [35/37] Linking target samples/null 00:01:14.752 [36/37] Linking target samples/gpio-pci-idio-16 00:01:14.752 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:14.752 INFO: autodetecting backend as ninja 00:01:14.752 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:14.752 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:15.014 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:15.014 ninja: no work to do. 00:01:21.605 The Meson build system 00:01:21.605 Version: 1.5.0 00:01:21.605 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:21.605 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:21.605 Build type: native build 00:01:21.605 Program cat found: YES (/usr/bin/cat) 00:01:21.605 Project name: DPDK 00:01:21.605 Project version: 24.03.0 00:01:21.606 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:21.606 C linker for the host machine: cc ld.bfd 2.40-14 00:01:21.606 Host machine cpu family: x86_64 00:01:21.606 Host machine cpu: x86_64 00:01:21.606 Message: ## Building in Developer Mode ## 00:01:21.606 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:21.606 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:21.606 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:21.606 Program python3 found: YES (/usr/bin/python3) 00:01:21.606 Program cat found: YES (/usr/bin/cat) 00:01:21.606 Compiler for C supports arguments 
-march=native: YES 00:01:21.606 Checking for size of "void *" : 8 00:01:21.606 Checking for size of "void *" : 8 (cached) 00:01:21.606 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:21.606 Library m found: YES 00:01:21.606 Library numa found: YES 00:01:21.606 Has header "numaif.h" : YES 00:01:21.606 Library fdt found: NO 00:01:21.606 Library execinfo found: NO 00:01:21.606 Has header "execinfo.h" : YES 00:01:21.606 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:21.606 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:21.606 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:21.606 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:21.606 Run-time dependency openssl found: YES 3.1.1 00:01:21.606 Run-time dependency libpcap found: YES 1.10.4 00:01:21.606 Has header "pcap.h" with dependency libpcap: YES 00:01:21.606 Compiler for C supports arguments -Wcast-qual: YES 00:01:21.606 Compiler for C supports arguments -Wdeprecated: YES 00:01:21.606 Compiler for C supports arguments -Wformat: YES 00:01:21.606 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:21.606 Compiler for C supports arguments -Wformat-security: NO 00:01:21.606 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:21.606 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:21.606 Compiler for C supports arguments -Wnested-externs: YES 00:01:21.606 Compiler for C supports arguments -Wold-style-definition: YES 00:01:21.606 Compiler for C supports arguments -Wpointer-arith: YES 00:01:21.606 Compiler for C supports arguments -Wsign-compare: YES 00:01:21.606 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:21.606 Compiler for C supports arguments -Wundef: YES 00:01:21.606 Compiler for C supports arguments -Wwrite-strings: YES 00:01:21.606 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:21.606 Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:01:21.606 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:21.606 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:21.606 Program objdump found: YES (/usr/bin/objdump)
00:01:21.606 Compiler for C supports arguments -mavx512f: YES
00:01:21.606 Checking if "AVX512 checking" compiles: YES
00:01:21.606 Fetching value of define "__SSE4_2__" : 1
00:01:21.606 Fetching value of define "__AES__" : 1
00:01:21.606 Fetching value of define "__AVX__" : 1
00:01:21.606 Fetching value of define "__AVX2__" : 1
00:01:21.606 Fetching value of define "__AVX512BW__" : 1
00:01:21.606 Fetching value of define "__AVX512CD__" : 1
00:01:21.606 Fetching value of define "__AVX512DQ__" : 1
00:01:21.606 Fetching value of define "__AVX512F__" : 1
00:01:21.606 Fetching value of define "__AVX512VL__" : 1
00:01:21.606 Fetching value of define "__PCLMUL__" : 1
00:01:21.606 Fetching value of define "__RDRND__" : 1
00:01:21.606 Fetching value of define "__RDSEED__" : 1
00:01:21.606 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:21.606 Fetching value of define "__znver1__" : (undefined)
00:01:21.606 Fetching value of define "__znver2__" : (undefined)
00:01:21.606 Fetching value of define "__znver3__" : (undefined)
00:01:21.606 Fetching value of define "__znver4__" : (undefined)
00:01:21.606 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:21.606 Message: lib/log: Defining dependency "log"
00:01:21.606 Message: lib/kvargs: Defining dependency "kvargs"
00:01:21.606 Message: lib/telemetry: Defining dependency "telemetry"
00:01:21.606 Checking for function "getentropy" : NO
00:01:21.606 Message: lib/eal: Defining dependency "eal"
00:01:21.606 Message: lib/ring: Defining dependency "ring"
00:01:21.606 Message: lib/rcu: Defining dependency "rcu"
00:01:21.606 Message: lib/mempool: Defining dependency "mempool"
00:01:21.606 Message: lib/mbuf: Defining dependency "mbuf"
00:01:21.606 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:21.606 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:21.606 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:21.606 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:21.606 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:21.606 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:21.606 Compiler for C supports arguments -mpclmul: YES
00:01:21.606 Compiler for C supports arguments -maes: YES
00:01:21.606 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:21.606 Compiler for C supports arguments -mavx512bw: YES
00:01:21.606 Compiler for C supports arguments -mavx512dq: YES
00:01:21.606 Compiler for C supports arguments -mavx512vl: YES
00:01:21.606 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:21.606 Compiler for C supports arguments -mavx2: YES
00:01:21.606 Compiler for C supports arguments -mavx: YES
00:01:21.606 Message: lib/net: Defining dependency "net"
00:01:21.606 Message: lib/meter: Defining dependency "meter"
00:01:21.606 Message: lib/ethdev: Defining dependency "ethdev"
00:01:21.606 Message: lib/pci: Defining dependency "pci"
00:01:21.606 Message: lib/cmdline: Defining dependency "cmdline"
00:01:21.606 Message: lib/hash: Defining dependency "hash"
00:01:21.606 Message: lib/timer: Defining dependency "timer"
00:01:21.606 Message: lib/compressdev: Defining dependency "compressdev"
00:01:21.606 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:21.606 Message: lib/dmadev: Defining dependency "dmadev"
00:01:21.606 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:21.606 Message: lib/power: Defining dependency "power"
00:01:21.606 Message: lib/reorder: Defining dependency "reorder"
00:01:21.606 Message: lib/security: Defining dependency "security"
00:01:21.606 Has header "linux/userfaultfd.h" : YES
00:01:21.606 Has header "linux/vduse.h" : YES
00:01:21.606 Message: lib/vhost: Defining dependency "vhost"
00:01:21.606 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:21.606 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:21.606 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:21.606 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:21.606 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:21.606 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:21.606 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:21.606 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:21.606 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:21.606 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:21.606 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:21.606 Configuring doxy-api-html.conf using configuration
00:01:21.606 Configuring doxy-api-man.conf using configuration
00:01:21.606 Program mandb found: YES (/usr/bin/mandb)
00:01:21.606 Program sphinx-build found: NO
00:01:21.606 Configuring rte_build_config.h using configuration
00:01:21.606 Message:
00:01:21.606 =================
00:01:21.606 Applications Enabled
00:01:21.606 =================
00:01:21.606
00:01:21.606 apps:
00:01:21.606
00:01:21.606
00:01:21.606 Message:
00:01:21.606 =================
00:01:21.606 Libraries Enabled
00:01:21.606 =================
00:01:21.606
00:01:21.606 libs:
00:01:21.606 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:21.606 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:21.606 cryptodev, dmadev, power, reorder, security, vhost,
00:01:21.606
00:01:21.606 Message:
00:01:21.606 ===============
00:01:21.606 Drivers Enabled
00:01:21.606 ===============
00:01:21.606
00:01:21.606 common:
00:01:21.606
00:01:21.606 bus:
00:01:21.606 pci, vdev,
00:01:21.606 mempool:
00:01:21.606 ring,
00:01:21.606 dma:
00:01:21.606
00:01:21.606 net:
00:01:21.606
00:01:21.606 crypto:
00:01:21.606
00:01:21.606 compress:
00:01:21.606
00:01:21.606 vdpa:
00:01:21.606
00:01:21.606
00:01:21.606 Message:
00:01:21.606 =================
00:01:21.606 Content Skipped
00:01:21.606 =================
00:01:21.606
00:01:21.606 apps:
00:01:21.606 dumpcap: explicitly disabled via build config
00:01:21.606 graph: explicitly disabled via build config
00:01:21.606 pdump: explicitly disabled via build config
00:01:21.606 proc-info: explicitly disabled via build config
00:01:21.606 test-acl: explicitly disabled via build config
00:01:21.606 test-bbdev: explicitly disabled via build config
00:01:21.606 test-cmdline: explicitly disabled via build config
00:01:21.606 test-compress-perf: explicitly disabled via build config
00:01:21.606 test-crypto-perf: explicitly disabled via build config
00:01:21.606 test-dma-perf: explicitly disabled via build config
00:01:21.606 test-eventdev: explicitly disabled via build config
00:01:21.606 test-fib: explicitly disabled via build config
00:01:21.606 test-flow-perf: explicitly disabled via build config
00:01:21.606 test-gpudev: explicitly disabled via build config
00:01:21.606 test-mldev: explicitly disabled via build config
00:01:21.606 test-pipeline: explicitly disabled via build config
00:01:21.606 test-pmd: explicitly disabled via build config
00:01:21.606 test-regex: explicitly disabled via build config
00:01:21.606 test-sad: explicitly disabled via build config
00:01:21.606 test-security-perf: explicitly disabled via build config
00:01:21.606
00:01:21.606 libs:
00:01:21.606 argparse: explicitly disabled via build config
00:01:21.606 metrics: explicitly disabled via build config
00:01:21.606 acl: explicitly disabled via build config
00:01:21.606 bbdev: explicitly disabled via build config
00:01:21.606 bitratestats: explicitly disabled via build config
00:01:21.606 bpf: explicitly disabled via build config
00:01:21.606 cfgfile: explicitly disabled via build config
00:01:21.606 distributor: explicitly disabled via build config
00:01:21.606 efd: explicitly disabled via build config
00:01:21.606 eventdev: explicitly disabled via build config
00:01:21.607 dispatcher: explicitly disabled via build config
00:01:21.607 gpudev: explicitly disabled via build config
00:01:21.607 gro: explicitly disabled via build config
00:01:21.607 gso: explicitly disabled via build config
00:01:21.607 ip_frag: explicitly disabled via build config
00:01:21.607 jobstats: explicitly disabled via build config
00:01:21.607 latencystats: explicitly disabled via build config
00:01:21.607 lpm: explicitly disabled via build config
00:01:21.607 member: explicitly disabled via build config
00:01:21.607 pcapng: explicitly disabled via build config
00:01:21.607 rawdev: explicitly disabled via build config
00:01:21.607 regexdev: explicitly disabled via build config
00:01:21.607 mldev: explicitly disabled via build config
00:01:21.607 rib: explicitly disabled via build config
00:01:21.607 sched: explicitly disabled via build config
00:01:21.607 stack: explicitly disabled via build config
00:01:21.607 ipsec: explicitly disabled via build config
00:01:21.607 pdcp: explicitly disabled via build config
00:01:21.607 fib: explicitly disabled via build config
00:01:21.607 port: explicitly disabled via build config
00:01:21.607 pdump: explicitly disabled via build config
00:01:21.607 table: explicitly disabled via build config
00:01:21.607 pipeline: explicitly disabled via build config
00:01:21.607 graph: explicitly disabled via build config
00:01:21.607 node: explicitly disabled via build config
00:01:21.607
00:01:21.607 drivers:
00:01:21.607 common/cpt: not in enabled drivers build config
00:01:21.607 common/dpaax: not in enabled drivers build config
00:01:21.607 common/iavf: not in enabled drivers build config
00:01:21.607 common/idpf: not in enabled drivers build config
00:01:21.607 common/ionic: not in enabled drivers build config
00:01:21.607 common/mvep: not in enabled drivers build config
00:01:21.607 common/octeontx: not in enabled drivers build config
00:01:21.607 bus/auxiliary: not in enabled drivers build config
00:01:21.607 bus/cdx: not in enabled drivers build config
00:01:21.607 bus/dpaa: not in enabled drivers build config
00:01:21.607 bus/fslmc: not in enabled drivers build config
00:01:21.607 bus/ifpga: not in enabled drivers build config
00:01:21.607 bus/platform: not in enabled drivers build config
00:01:21.607 bus/uacce: not in enabled drivers build config
00:01:21.607 bus/vmbus: not in enabled drivers build config
00:01:21.607 common/cnxk: not in enabled drivers build config
00:01:21.607 common/mlx5: not in enabled drivers build config
00:01:21.607 common/nfp: not in enabled drivers build config
00:01:21.607 common/nitrox: not in enabled drivers build config
00:01:21.607 common/qat: not in enabled drivers build config
00:01:21.607 common/sfc_efx: not in enabled drivers build config
00:01:21.607 mempool/bucket: not in enabled drivers build config
00:01:21.607 mempool/cnxk: not in enabled drivers build config
00:01:21.607 mempool/dpaa: not in enabled drivers build config
00:01:21.607 mempool/dpaa2: not in enabled drivers build config
00:01:21.607 mempool/octeontx: not in enabled drivers build config
00:01:21.607 mempool/stack: not in enabled drivers build config
00:01:21.607 dma/cnxk: not in enabled drivers build config
00:01:21.607 dma/dpaa: not in enabled drivers build config
00:01:21.607 dma/dpaa2: not in enabled drivers build config
00:01:21.607 dma/hisilicon: not in enabled drivers build config
00:01:21.607 dma/idxd: not in enabled drivers build config
00:01:21.607 dma/ioat: not in enabled drivers build config
00:01:21.607 dma/skeleton: not in enabled drivers build config
00:01:21.607 net/af_packet: not in enabled drivers build config
00:01:21.607 net/af_xdp: not in enabled drivers build config
00:01:21.607 net/ark: not in enabled drivers build config
00:01:21.607 net/atlantic: not in enabled drivers build config
00:01:21.607 net/avp: not in enabled drivers build config
00:01:21.607 net/axgbe: not in enabled drivers build config
00:01:21.607 net/bnx2x: not in enabled drivers build config
00:01:21.607 net/bnxt: not in enabled drivers build config
00:01:21.607 net/bonding: not in enabled drivers build config
00:01:21.607 net/cnxk: not in enabled drivers build config
00:01:21.607 net/cpfl: not in enabled drivers build config
00:01:21.607 net/cxgbe: not in enabled drivers build config
00:01:21.607 net/dpaa: not in enabled drivers build config
00:01:21.607 net/dpaa2: not in enabled drivers build config
00:01:21.607 net/e1000: not in enabled drivers build config
00:01:21.607 net/ena: not in enabled drivers build config
00:01:21.607 net/enetc: not in enabled drivers build config
00:01:21.607 net/enetfec: not in enabled drivers build config
00:01:21.607 net/enic: not in enabled drivers build config
00:01:21.607 net/failsafe: not in enabled drivers build config
00:01:21.607 net/fm10k: not in enabled drivers build config
00:01:21.607 net/gve: not in enabled drivers build config
00:01:21.607 net/hinic: not in enabled drivers build config
00:01:21.607 net/hns3: not in enabled drivers build config
00:01:21.607 net/i40e: not in enabled drivers build config
00:01:21.607 net/iavf: not in enabled drivers build config
00:01:21.607 net/ice: not in enabled drivers build config
00:01:21.607 net/idpf: not in enabled drivers build config
00:01:21.607 net/igc: not in enabled drivers build config
00:01:21.607 net/ionic: not in enabled drivers build config
00:01:21.607 net/ipn3ke: not in enabled drivers build config
00:01:21.607 net/ixgbe: not in enabled drivers build config
00:01:21.607 net/mana: not in enabled drivers build config
00:01:21.607 net/memif: not in enabled drivers build config
00:01:21.607 net/mlx4: not in enabled drivers build config
00:01:21.607 net/mlx5: not in enabled drivers build config
00:01:21.607 net/mvneta: not in enabled drivers build config
00:01:21.607 net/mvpp2: not in enabled drivers build config
00:01:21.607 net/netvsc: not in enabled drivers build config
00:01:21.607 net/nfb: not in enabled drivers build config
00:01:21.607 net/nfp: not in enabled drivers build config
00:01:21.607 net/ngbe: not in enabled drivers build config
00:01:21.607 net/null: not in enabled drivers build config
00:01:21.607 net/octeontx: not in enabled drivers build config
00:01:21.607 net/octeon_ep: not in enabled drivers build config
00:01:21.607 net/pcap: not in enabled drivers build config
00:01:21.607 net/pfe: not in enabled drivers build config
00:01:21.607 net/qede: not in enabled drivers build config
00:01:21.607 net/ring: not in enabled drivers build config
00:01:21.607 net/sfc: not in enabled drivers build config
00:01:21.607 net/softnic: not in enabled drivers build config
00:01:21.607 net/tap: not in enabled drivers build config
00:01:21.607 net/thunderx: not in enabled drivers build config
00:01:21.607 net/txgbe: not in enabled drivers build config
00:01:21.607 net/vdev_netvsc: not in enabled drivers build config
00:01:21.607 net/vhost: not in enabled drivers build config
00:01:21.607 net/virtio: not in enabled drivers build config
00:01:21.607 net/vmxnet3: not in enabled drivers build config
00:01:21.607 raw/*: missing internal dependency, "rawdev"
00:01:21.607 crypto/armv8: not in enabled drivers build config
00:01:21.607 crypto/bcmfs: not in enabled drivers build config
00:01:21.607 crypto/caam_jr: not in enabled drivers build config
00:01:21.607 crypto/ccp: not in enabled drivers build config
00:01:21.607 crypto/cnxk: not in enabled drivers build config
00:01:21.607 crypto/dpaa_sec: not in enabled drivers build config
00:01:21.607 crypto/dpaa2_sec: not in enabled drivers build config
00:01:21.607 crypto/ipsec_mb: not in enabled drivers build config
00:01:21.607 crypto/mlx5: not in enabled drivers build config
00:01:21.607 crypto/mvsam: not in enabled drivers build config
00:01:21.607 crypto/nitrox: not in enabled drivers build config
00:01:21.607 crypto/null: not in enabled drivers build config
00:01:21.607 crypto/octeontx: not in enabled drivers build config
00:01:21.607 crypto/openssl: not in enabled drivers build config
00:01:21.607 crypto/scheduler: not in enabled drivers build config
00:01:21.607 crypto/uadk: not in enabled drivers build config
00:01:21.607 crypto/virtio: not in enabled drivers build config
00:01:21.607 compress/isal: not in enabled drivers build config
00:01:21.607 compress/mlx5: not in enabled drivers build config
00:01:21.607 compress/nitrox: not in enabled drivers build config
00:01:21.607 compress/octeontx: not in enabled drivers build config
00:01:21.607 compress/zlib: not in enabled drivers build config
00:01:21.607 regex/*: missing internal dependency, "regexdev"
00:01:21.607 ml/*: missing internal dependency, "mldev"
00:01:21.607 vdpa/ifc: not in enabled drivers build config
00:01:21.607 vdpa/mlx5: not in enabled drivers build config
00:01:21.607 vdpa/nfp: not in enabled drivers build config
00:01:21.607 vdpa/sfc: not in enabled drivers build config
00:01:21.607 event/*: missing internal dependency, "eventdev"
00:01:21.607 baseband/*: missing internal dependency, "bbdev"
00:01:21.607 gpu/*: missing internal dependency, "gpudev"
00:01:21.607
00:01:21.607
00:01:21.607 Build targets in project: 84
00:01:21.607
00:01:21.607 DPDK 24.03.0
00:01:21.607
00:01:21.607 User defined options
00:01:21.607 buildtype : debug
00:01:21.607 default_library : shared
00:01:21.607 libdir : lib
00:01:21.607 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:21.607 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:21.607 c_link_args :
00:01:21.607 cpu_instruction_set: native
00:01:21.607 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:21.607 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:21.607 enable_docs : false
00:01:21.607 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:21.607 enable_kmods : false
00:01:21.607 max_lcores : 128
00:01:21.607 tests : false
00:01:21.607
00:01:21.607 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:21.607 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:21.882 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:21.882 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:21.882 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:21.882 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:21.882 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:21.882 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:21.882 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:21.882 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:21.882 [9/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:21.882 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:21.882 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:21.882 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:21.882 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:21.882 [14/267] Linking static target lib/librte_kvargs.a
00:01:22.147 [15/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:22.147 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:22.147 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:22.147 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:22.147 [19/267] Linking static target lib/librte_log.a
00:01:22.147 [20/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:22.147 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:22.147 [22/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:22.147 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:22.147 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:22.147 [25/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:22.147 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:22.147 [27/267] Linking static target lib/librte_pci.a
00:01:22.147 [28/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:22.147 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:22.147 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:22.147 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:22.147 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:22.147 [33/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:22.147 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:22.147 [35/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:22.408 [36/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:22.408 [37/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:22.408 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:22.408 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:22.408 [40/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:22.408 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:22.408 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:22.408 [43/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:22.408 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:22.408 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.408 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:22.408 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:22.408 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:22.408 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:22.408 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:22.408 [51/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:22.408 [52/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:22.408 [53/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:22.408 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:22.408 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:22.408 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:22.408 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:22.408 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:22.408 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:22.408 [60/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.408 [61/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:22.408 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:22.408 [63/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:22.408 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:22.408 [65/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:22.408 [66/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:22.408 [67/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:22.408 [68/267] Linking static target lib/librte_meter.a
00:01:22.408 [69/267] Linking static target lib/librte_telemetry.a
00:01:22.408 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:22.408 [71/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:22.672 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:22.672 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:22.672 [74/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:22.672 [75/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:22.672 [76/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:22.672 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:22.672 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:22.672 [79/267] Linking static target lib/librte_timer.a
00:01:22.672 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:22.672 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:22.672 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:22.672 [83/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:22.672 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:22.672 [85/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:22.672 [86/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:22.672 [87/267] Linking static target lib/librte_ring.a
00:01:22.672 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:22.672 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:22.672 [90/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:22.672 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:22.672 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:22.672 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:22.672 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:22.672 [95/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:22.672 [96/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:22.672 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:22.672 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:22.672 [99/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:22.672 [100/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:22.672 [101/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:22.672 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:22.672 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:22.672 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:22.672 [105/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:22.672 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:22.672 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:22.672 [108/267] Linking static target lib/librte_cmdline.a
00:01:22.672 [109/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:22.672 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:22.672 [111/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:22.672 [112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:22.672 [113/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:22.672 [114/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:22.672 [115/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:22.672 [116/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:22.672 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:22.672 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:22.672 [119/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:22.672 [120/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:22.672 [121/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:22.672 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:22.673 [123/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.673 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:22.673 [125/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:01:22.673 [126/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:22.673 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:22.673 [128/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:22.673 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:22.673 [130/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:22.673 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:22.673 [132/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:22.673 [133/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:22.673 [134/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:22.673 [135/267] Linking target lib/librte_log.so.24.1
00:01:22.673 [136/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:22.673 [137/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:22.673 [138/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:22.673 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:22.673 [140/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:22.673 [141/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:22.673 [142/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:22.673 [143/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:22.673 [144/267] Linking static target lib/librte_net.a
00:01:22.673 [145/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:22.673 [146/267] Linking static target lib/librte_mempool.a
00:01:22.673 [147/267] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:22.673 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:22.673 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:22.673 [150/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:22.673 [151/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:22.673 [152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:22.673 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:22.673 [154/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:22.673 [155/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:22.673 [156/267] Linking static target lib/librte_dmadev.a
00:01:22.673 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:22.673 [158/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:22.673 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:22.673 [160/267] Linking static target lib/librte_power.a
00:01:22.673 [161/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:22.673 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:22.673 [163/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:22.673 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:22.673 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:22.673 [166/267] Linking static target lib/librte_rcu.a
00:01:22.673 [167/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:22.673 [168/267] Linking static target lib/librte_compressdev.a
00:01:22.673 [169/267] Linking static target drivers/librte_bus_vdev.a
00:01:22.673 [170/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.673 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:22.673 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:22.934 [173/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:22.934 [174/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:22.934 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:22.934 [176/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:22.934 [177/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:22.934 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:22.934 [179/267] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:22.934 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:22.934 [181/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:22.934 [182/267] Linking target lib/librte_kvargs.so.24.1
00:01:22.934 [183/267] Linking static target lib/librte_eal.a
00:01:22.934 [184/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:22.934 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:22.934 [186/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:22.934 [187/267] Linking static target lib/librte_reorder.a
00:01:22.934 [188/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:22.934 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:22.934 [190/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:22.934 [191/267] Linking static target lib/librte_mbuf.a
00:01:22.934 [192/267] Linking static target lib/librte_security.a
00:01:22.934 [193/267] Linking static target drivers/librte_mempool_ring.a
00:01:22.934 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:22.934 [195/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.934 [196/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.934 [197/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:22.934 [198/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:22.934 [199/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:22.934 [200/267] Linking static target lib/librte_hash.a
00:01:23.195 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.195 [202/267] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:23.195 [203/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.195 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:23.195 [205/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:23.195 [206/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:23.195 [207/267] Linking static target drivers/librte_bus_pci.a
00:01:23.195 [208/267] Linking target lib/librte_telemetry.so.24.1
00:01:23.195 [209/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.195 [210/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.195 [211/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:23.195 [212/267] Linking static target lib/librte_cryptodev.a
00:01:23.195 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:23.456 [214/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.456 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.456 [216/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:23.456 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.718 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson
to capture output) 00:01:23.718 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:23.718 [220/267] Linking static target lib/librte_ethdev.a 00:01:23.718 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.718 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.718 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.979 [224/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.979 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.979 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.551 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:24.551 [228/267] Linking static target lib/librte_vhost.a 00:01:25.495 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.436 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.018 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.402 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.402 [233/267] Linking target lib/librte_eal.so.24.1 00:01:34.402 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:34.402 [235/267] Linking target lib/librte_ring.so.24.1 00:01:34.402 [236/267] Linking target lib/librte_timer.so.24.1 00:01:34.402 [237/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:34.402 [238/267] Linking target lib/librte_pci.so.24.1 00:01:34.402 [239/267] Linking target lib/librte_meter.so.24.1 00:01:34.402 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:34.663 
[241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:34.663 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:34.663 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:34.663 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:34.663 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:34.663 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:34.663 [247/267] Linking target lib/librte_mempool.so.24.1 00:01:34.663 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:34.663 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:34.923 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:34.923 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:34.923 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:34.923 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:34.923 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:34.923 [255/267] Linking target lib/librte_net.so.24.1 00:01:34.923 [256/267] Linking target lib/librte_cryptodev.so.24.1 00:01:34.923 [257/267] Linking target lib/librte_reorder.so.24.1 00:01:35.184 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:35.184 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:35.184 [260/267] Linking target lib/librte_hash.so.24.1 00:01:35.184 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:35.184 [262/267] Linking target lib/librte_security.so.24.1 00:01:35.184 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:35.184 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:35.444 [265/267] Generating symbol 
file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:35.444 [266/267] Linking target lib/librte_power.so.24.1 00:01:35.444 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:35.444 INFO: autodetecting backend as ninja 00:01:35.444 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:38.744 CC lib/ut/ut.o 00:01:38.744 CC lib/ut_mock/mock.o 00:01:38.744 CC lib/log/log.o 00:01:38.744 CC lib/log/log_flags.o 00:01:38.744 CC lib/log/log_deprecated.o 00:01:39.004 LIB libspdk_ut.a 00:01:39.004 LIB libspdk_ut_mock.a 00:01:39.004 LIB libspdk_log.a 00:01:39.004 SO libspdk_ut.so.2.0 00:01:39.004 SO libspdk_ut_mock.so.6.0 00:01:39.004 SO libspdk_log.so.7.0 00:01:39.004 SYMLINK libspdk_ut.so 00:01:39.004 SYMLINK libspdk_ut_mock.so 00:01:39.004 SYMLINK libspdk_log.so 00:01:39.575 CC lib/dma/dma.o 00:01:39.575 CC lib/util/bit_array.o 00:01:39.575 CC lib/util/base64.o 00:01:39.575 CC lib/ioat/ioat.o 00:01:39.575 CC lib/util/cpuset.o 00:01:39.575 CC lib/util/crc16.o 00:01:39.575 CC lib/util/crc32.o 00:01:39.575 CC lib/util/crc32c.o 00:01:39.576 CXX lib/trace_parser/trace.o 00:01:39.576 CC lib/util/crc32_ieee.o 00:01:39.576 CC lib/util/crc64.o 00:01:39.576 CC lib/util/dif.o 00:01:39.576 CC lib/util/fd.o 00:01:39.576 CC lib/util/fd_group.o 00:01:39.576 CC lib/util/file.o 00:01:39.576 CC lib/util/hexlify.o 00:01:39.576 CC lib/util/iov.o 00:01:39.576 CC lib/util/math.o 00:01:39.576 CC lib/util/net.o 00:01:39.576 CC lib/util/pipe.o 00:01:39.576 CC lib/util/strerror_tls.o 00:01:39.576 CC lib/util/string.o 00:01:39.576 CC lib/util/uuid.o 00:01:39.576 CC lib/util/xor.o 00:01:39.576 CC lib/util/zipf.o 00:01:39.576 CC lib/util/md5.o 00:01:39.576 CC lib/vfio_user/host/vfio_user_pci.o 00:01:39.576 CC lib/vfio_user/host/vfio_user.o 00:01:39.576 LIB libspdk_dma.a 00:01:39.576 SO libspdk_dma.so.5.0 00:01:39.836 LIB libspdk_ioat.a 00:01:39.836 SYMLINK libspdk_dma.so 00:01:39.836 SO 
libspdk_ioat.so.7.0 00:01:39.836 SYMLINK libspdk_ioat.so 00:01:39.836 LIB libspdk_vfio_user.a 00:01:39.836 SO libspdk_vfio_user.so.5.0 00:01:39.836 LIB libspdk_util.a 00:01:39.836 SYMLINK libspdk_vfio_user.so 00:01:40.096 SO libspdk_util.so.10.0 00:01:40.096 LIB libspdk_trace_parser.a 00:01:40.096 SO libspdk_trace_parser.so.6.0 00:01:40.096 SYMLINK libspdk_util.so 00:01:40.096 SYMLINK libspdk_trace_parser.so 00:01:40.358 CC lib/env_dpdk/env.o 00:01:40.358 CC lib/json/json_parse.o 00:01:40.358 CC lib/conf/conf.o 00:01:40.358 CC lib/env_dpdk/memory.o 00:01:40.358 CC lib/json/json_util.o 00:01:40.358 CC lib/env_dpdk/pci.o 00:01:40.358 CC lib/env_dpdk/init.o 00:01:40.358 CC lib/env_dpdk/threads.o 00:01:40.358 CC lib/json/json_write.o 00:01:40.358 CC lib/env_dpdk/pci_ioat.o 00:01:40.358 CC lib/rdma_provider/common.o 00:01:40.358 CC lib/env_dpdk/pci_virtio.o 00:01:40.358 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:40.358 CC lib/env_dpdk/pci_vmd.o 00:01:40.358 CC lib/env_dpdk/pci_idxd.o 00:01:40.358 CC lib/rdma_utils/rdma_utils.o 00:01:40.358 CC lib/vmd/vmd.o 00:01:40.358 CC lib/env_dpdk/pci_event.o 00:01:40.358 CC lib/env_dpdk/sigbus_handler.o 00:01:40.358 CC lib/vmd/led.o 00:01:40.358 CC lib/env_dpdk/pci_dpdk.o 00:01:40.358 CC lib/idxd/idxd.o 00:01:40.358 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:40.358 CC lib/idxd/idxd_user.o 00:01:40.358 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:40.358 CC lib/idxd/idxd_kernel.o 00:01:40.617 LIB libspdk_rdma_provider.a 00:01:40.617 LIB libspdk_conf.a 00:01:40.617 SO libspdk_rdma_provider.so.6.0 00:01:40.617 SO libspdk_conf.so.6.0 00:01:40.878 LIB libspdk_rdma_utils.a 00:01:40.878 LIB libspdk_json.a 00:01:40.878 SYMLINK libspdk_rdma_provider.so 00:01:40.878 SO libspdk_json.so.6.0 00:01:40.878 SO libspdk_rdma_utils.so.1.0 00:01:40.878 SYMLINK libspdk_conf.so 00:01:40.878 SYMLINK libspdk_rdma_utils.so 00:01:40.878 SYMLINK libspdk_json.so 00:01:41.138 LIB libspdk_idxd.a 00:01:41.138 SO libspdk_idxd.so.12.1 00:01:41.138 LIB libspdk_vmd.a 
00:01:41.138 SO libspdk_vmd.so.6.0 00:01:41.138 SYMLINK libspdk_idxd.so 00:01:41.138 SYMLINK libspdk_vmd.so 00:01:41.138 CC lib/jsonrpc/jsonrpc_server.o 00:01:41.138 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:41.138 CC lib/jsonrpc/jsonrpc_client.o 00:01:41.138 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:41.398 LIB libspdk_jsonrpc.a 00:01:41.657 SO libspdk_jsonrpc.so.6.0 00:01:41.657 SYMLINK libspdk_jsonrpc.so 00:01:41.657 LIB libspdk_env_dpdk.a 00:01:41.657 SO libspdk_env_dpdk.so.15.0 00:01:41.917 SYMLINK libspdk_env_dpdk.so 00:01:41.917 CC lib/rpc/rpc.o 00:01:42.176 LIB libspdk_rpc.a 00:01:42.176 SO libspdk_rpc.so.6.0 00:01:42.176 SYMLINK libspdk_rpc.so 00:01:42.497 CC lib/trace/trace.o 00:01:42.497 CC lib/trace/trace_flags.o 00:01:42.497 CC lib/trace/trace_rpc.o 00:01:42.497 CC lib/keyring/keyring.o 00:01:42.497 CC lib/notify/notify.o 00:01:42.497 CC lib/keyring/keyring_rpc.o 00:01:42.497 CC lib/notify/notify_rpc.o 00:01:42.758 LIB libspdk_notify.a 00:01:42.758 SO libspdk_notify.so.6.0 00:01:42.758 LIB libspdk_trace.a 00:01:42.758 LIB libspdk_keyring.a 00:01:43.018 SO libspdk_keyring.so.2.0 00:01:43.018 SO libspdk_trace.so.11.0 00:01:43.018 SYMLINK libspdk_notify.so 00:01:43.018 SYMLINK libspdk_trace.so 00:01:43.018 SYMLINK libspdk_keyring.so 00:01:43.278 CC lib/sock/sock.o 00:01:43.278 CC lib/thread/thread.o 00:01:43.278 CC lib/thread/iobuf.o 00:01:43.278 CC lib/sock/sock_rpc.o 00:01:43.846 LIB libspdk_sock.a 00:01:43.846 SO libspdk_sock.so.10.0 00:01:43.846 SYMLINK libspdk_sock.so 00:01:44.105 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:44.105 CC lib/nvme/nvme_ctrlr.o 00:01:44.105 CC lib/nvme/nvme_ns.o 00:01:44.105 CC lib/nvme/nvme_fabric.o 00:01:44.106 CC lib/nvme/nvme_ns_cmd.o 00:01:44.106 CC lib/nvme/nvme_pcie_common.o 00:01:44.106 CC lib/nvme/nvme_pcie.o 00:01:44.106 CC lib/nvme/nvme_qpair.o 00:01:44.106 CC lib/nvme/nvme.o 00:01:44.106 CC lib/nvme/nvme_quirks.o 00:01:44.106 CC lib/nvme/nvme_transport.o 00:01:44.106 CC lib/nvme/nvme_discovery.o 00:01:44.106 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:44.106 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:44.106 CC lib/nvme/nvme_tcp.o 00:01:44.106 CC lib/nvme/nvme_opal.o 00:01:44.106 CC lib/nvme/nvme_io_msg.o 00:01:44.106 CC lib/nvme/nvme_poll_group.o 00:01:44.106 CC lib/nvme/nvme_zns.o 00:01:44.106 CC lib/nvme/nvme_stubs.o 00:01:44.106 CC lib/nvme/nvme_auth.o 00:01:44.106 CC lib/nvme/nvme_cuse.o 00:01:44.106 CC lib/nvme/nvme_vfio_user.o 00:01:44.106 CC lib/nvme/nvme_rdma.o 00:01:44.677 LIB libspdk_thread.a 00:01:44.678 SO libspdk_thread.so.10.1 00:01:44.678 SYMLINK libspdk_thread.so 00:01:44.939 CC lib/blob/blobstore.o 00:01:44.939 CC lib/blob/request.o 00:01:44.939 CC lib/blob/zeroes.o 00:01:44.939 CC lib/blob/blob_bs_dev.o 00:01:44.939 CC lib/accel/accel_rpc.o 00:01:44.939 CC lib/accel/accel.o 00:01:44.939 CC lib/accel/accel_sw.o 00:01:44.939 CC lib/init/json_config.o 00:01:44.939 CC lib/init/subsystem.o 00:01:44.939 CC lib/init/subsystem_rpc.o 00:01:44.939 CC lib/fsdev/fsdev.o 00:01:44.939 CC lib/init/rpc.o 00:01:44.939 CC lib/fsdev/fsdev_io.o 00:01:44.939 CC lib/fsdev/fsdev_rpc.o 00:01:44.939 CC lib/virtio/virtio.o 00:01:44.939 CC lib/virtio/virtio_vhost_user.o 00:01:44.939 CC lib/virtio/virtio_vfio_user.o 00:01:44.939 CC lib/virtio/virtio_pci.o 00:01:44.939 CC lib/vfu_tgt/tgt_endpoint.o 00:01:44.939 CC lib/vfu_tgt/tgt_rpc.o 00:01:45.199 LIB libspdk_init.a 00:01:45.199 SO libspdk_init.so.6.0 00:01:45.200 LIB libspdk_vfu_tgt.a 00:01:45.461 LIB libspdk_virtio.a 00:01:45.461 SYMLINK libspdk_init.so 00:01:45.461 SO libspdk_vfu_tgt.so.3.0 00:01:45.461 SO libspdk_virtio.so.7.0 00:01:45.461 SYMLINK libspdk_vfu_tgt.so 00:01:45.461 SYMLINK libspdk_virtio.so 00:01:45.722 LIB libspdk_fsdev.a 00:01:45.722 SO libspdk_fsdev.so.1.0 00:01:45.722 CC lib/event/app.o 00:01:45.722 CC lib/event/app_rpc.o 00:01:45.722 CC lib/event/reactor.o 00:01:45.722 CC lib/event/log_rpc.o 00:01:45.722 CC lib/event/scheduler_static.o 00:01:45.722 SYMLINK libspdk_fsdev.so 00:01:45.983 LIB libspdk_accel.a 
00:01:45.983 SO libspdk_accel.so.16.0 00:01:45.983 LIB libspdk_nvme.a 00:01:45.983 LIB libspdk_event.a 00:01:45.983 SYMLINK libspdk_accel.so 00:01:45.983 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:46.291 SO libspdk_nvme.so.14.0 00:01:46.291 SO libspdk_event.so.14.0 00:01:46.291 SYMLINK libspdk_event.so 00:01:46.291 SYMLINK libspdk_nvme.so 00:01:46.551 CC lib/bdev/bdev.o 00:01:46.551 CC lib/bdev/bdev_rpc.o 00:01:46.551 CC lib/bdev/bdev_zone.o 00:01:46.551 CC lib/bdev/part.o 00:01:46.551 CC lib/bdev/scsi_nvme.o 00:01:46.813 LIB libspdk_fuse_dispatcher.a 00:01:46.813 SO libspdk_fuse_dispatcher.so.1.0 00:01:46.813 SYMLINK libspdk_fuse_dispatcher.so 00:01:47.756 LIB libspdk_blob.a 00:01:47.756 SO libspdk_blob.so.11.0 00:01:47.756 SYMLINK libspdk_blob.so 00:01:48.016 CC lib/blobfs/blobfs.o 00:01:48.016 CC lib/blobfs/tree.o 00:01:48.016 CC lib/lvol/lvol.o 00:01:48.960 LIB libspdk_bdev.a 00:01:48.960 LIB libspdk_blobfs.a 00:01:48.960 SO libspdk_bdev.so.16.0 00:01:48.960 SO libspdk_blobfs.so.10.0 00:01:48.960 LIB libspdk_lvol.a 00:01:48.960 SYMLINK libspdk_blobfs.so 00:01:48.960 SYMLINK libspdk_bdev.so 00:01:48.960 SO libspdk_lvol.so.10.0 00:01:48.960 SYMLINK libspdk_lvol.so 00:01:49.222 CC lib/ublk/ublk.o 00:01:49.222 CC lib/ublk/ublk_rpc.o 00:01:49.222 CC lib/scsi/dev.o 00:01:49.222 CC lib/scsi/lun.o 00:01:49.222 CC lib/scsi/scsi.o 00:01:49.222 CC lib/scsi/port.o 00:01:49.222 CC lib/scsi/scsi_bdev.o 00:01:49.222 CC lib/scsi/task.o 00:01:49.222 CC lib/scsi/scsi_pr.o 00:01:49.222 CC lib/nvmf/ctrlr.o 00:01:49.222 CC lib/scsi/scsi_rpc.o 00:01:49.222 CC lib/nvmf/ctrlr_discovery.o 00:01:49.222 CC lib/nvmf/ctrlr_bdev.o 00:01:49.222 CC lib/nvmf/subsystem.o 00:01:49.222 CC lib/nvmf/nvmf_rpc.o 00:01:49.222 CC lib/nbd/nbd.o 00:01:49.222 CC lib/nvmf/nvmf.o 00:01:49.222 CC lib/nbd/nbd_rpc.o 00:01:49.222 CC lib/nvmf/transport.o 00:01:49.222 CC lib/nvmf/tcp.o 00:01:49.222 CC lib/ftl/ftl_core.o 00:01:49.222 CC lib/nvmf/stubs.o 00:01:49.222 CC lib/ftl/ftl_init.o 00:01:49.222 CC 
lib/nvmf/mdns_server.o 00:01:49.222 CC lib/ftl/ftl_layout.o 00:01:49.222 CC lib/nvmf/vfio_user.o 00:01:49.222 CC lib/ftl/ftl_debug.o 00:01:49.222 CC lib/nvmf/rdma.o 00:01:49.222 CC lib/ftl/ftl_sb.o 00:01:49.222 CC lib/ftl/ftl_io.o 00:01:49.222 CC lib/nvmf/auth.o 00:01:49.222 CC lib/ftl/ftl_l2p.o 00:01:49.222 CC lib/ftl/ftl_l2p_flat.o 00:01:49.222 CC lib/ftl/ftl_nv_cache.o 00:01:49.222 CC lib/ftl/ftl_band_ops.o 00:01:49.222 CC lib/ftl/ftl_band.o 00:01:49.222 CC lib/ftl/ftl_writer.o 00:01:49.222 CC lib/ftl/ftl_rq.o 00:01:49.222 CC lib/ftl/ftl_reloc.o 00:01:49.222 CC lib/ftl/ftl_l2p_cache.o 00:01:49.222 CC lib/ftl/ftl_p2l.o 00:01:49.222 CC lib/ftl/ftl_p2l_log.o 00:01:49.222 CC lib/ftl/mngt/ftl_mngt.o 00:01:49.222 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:49.222 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:49.222 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:49.222 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:49.222 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:49.222 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:49.481 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:49.482 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:49.482 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:49.482 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:49.482 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:49.482 CC lib/ftl/utils/ftl_md.o 00:01:49.482 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:49.482 CC lib/ftl/utils/ftl_bitmap.o 00:01:49.482 CC lib/ftl/utils/ftl_conf.o 00:01:49.482 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:49.482 CC lib/ftl/utils/ftl_mempool.o 00:01:49.482 CC lib/ftl/utils/ftl_property.o 00:01:49.482 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:49.482 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:49.482 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:49.482 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:49.482 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:49.482 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:49.482 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:49.482 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:49.482 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:49.482 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:49.482 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:49.482 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:49.482 CC lib/ftl/base/ftl_base_dev.o 00:01:49.482 CC lib/ftl/ftl_trace.o 00:01:49.482 CC lib/ftl/base/ftl_base_bdev.o 00:01:50.053 LIB libspdk_nbd.a 00:01:50.053 SO libspdk_nbd.so.7.0 00:01:50.053 LIB libspdk_scsi.a 00:01:50.053 SO libspdk_scsi.so.9.0 00:01:50.053 SYMLINK libspdk_nbd.so 00:01:50.053 SYMLINK libspdk_scsi.so 00:01:50.053 LIB libspdk_ublk.a 00:01:50.314 SO libspdk_ublk.so.3.0 00:01:50.314 SYMLINK libspdk_ublk.so 00:01:50.314 LIB libspdk_ftl.a 00:01:50.574 CC lib/iscsi/conn.o 00:01:50.574 CC lib/iscsi/init_grp.o 00:01:50.574 CC lib/iscsi/iscsi.o 00:01:50.574 CC lib/iscsi/param.o 00:01:50.574 CC lib/vhost/vhost.o 00:01:50.574 CC lib/iscsi/portal_grp.o 00:01:50.574 CC lib/iscsi/tgt_node.o 00:01:50.574 CC lib/vhost/vhost_rpc.o 00:01:50.574 CC lib/iscsi/iscsi_subsystem.o 00:01:50.574 CC lib/vhost/vhost_scsi.o 00:01:50.574 CC lib/iscsi/iscsi_rpc.o 00:01:50.574 CC lib/vhost/vhost_blk.o 00:01:50.574 CC lib/iscsi/task.o 00:01:50.574 CC lib/vhost/rte_vhost_user.o 00:01:50.574 SO libspdk_ftl.so.9.0 00:01:50.835 SYMLINK libspdk_ftl.so 00:01:51.407 LIB libspdk_nvmf.a 00:01:51.407 SO libspdk_nvmf.so.19.0 00:01:51.407 LIB libspdk_vhost.a 00:01:51.407 SO libspdk_vhost.so.8.0 00:01:51.670 SYMLINK libspdk_vhost.so 00:01:51.670 SYMLINK libspdk_nvmf.so 00:01:51.670 LIB libspdk_iscsi.a 00:01:51.670 SO libspdk_iscsi.so.8.0 00:01:51.932 SYMLINK libspdk_iscsi.so 00:01:52.504 CC module/env_dpdk/env_dpdk_rpc.o 00:01:52.504 CC module/vfu_device/vfu_virtio.o 00:01:52.504 CC module/vfu_device/vfu_virtio_blk.o 00:01:52.504 CC module/vfu_device/vfu_virtio_scsi.o 00:01:52.504 CC module/vfu_device/vfu_virtio_rpc.o 00:01:52.504 CC module/vfu_device/vfu_virtio_fs.o 00:01:52.765 CC module/accel/iaa/accel_iaa.o 00:01:52.765 CC module/accel/iaa/accel_iaa_rpc.o 00:01:52.765 CC module/accel/error/accel_error.o 00:01:52.765 CC module/sock/posix/posix.o 
00:01:52.765 CC module/accel/error/accel_error_rpc.o 00:01:52.765 LIB libspdk_env_dpdk_rpc.a 00:01:52.765 CC module/accel/dsa/accel_dsa.o 00:01:52.765 CC module/accel/dsa/accel_dsa_rpc.o 00:01:52.765 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:52.765 CC module/scheduler/gscheduler/gscheduler.o 00:01:52.765 CC module/accel/ioat/accel_ioat.o 00:01:52.765 CC module/accel/ioat/accel_ioat_rpc.o 00:01:52.765 CC module/keyring/linux/keyring.o 00:01:52.765 CC module/keyring/linux/keyring_rpc.o 00:01:52.765 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:52.765 CC module/blob/bdev/blob_bdev.o 00:01:52.765 CC module/fsdev/aio/fsdev_aio.o 00:01:52.765 CC module/fsdev/aio/fsdev_aio_rpc.o 00:01:52.765 CC module/keyring/file/keyring.o 00:01:52.765 CC module/fsdev/aio/linux_aio_mgr.o 00:01:52.765 CC module/keyring/file/keyring_rpc.o 00:01:52.765 SO libspdk_env_dpdk_rpc.so.6.0 00:01:52.765 SYMLINK libspdk_env_dpdk_rpc.so 00:01:52.765 LIB libspdk_scheduler_gscheduler.a 00:01:52.765 LIB libspdk_scheduler_dpdk_governor.a 00:01:52.765 LIB libspdk_accel_error.a 00:01:52.765 LIB libspdk_keyring_linux.a 00:01:52.765 LIB libspdk_keyring_file.a 00:01:53.026 LIB libspdk_accel_iaa.a 00:01:53.026 SO libspdk_scheduler_gscheduler.so.4.0 00:01:53.026 LIB libspdk_scheduler_dynamic.a 00:01:53.026 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:53.026 LIB libspdk_accel_ioat.a 00:01:53.026 SO libspdk_accel_error.so.2.0 00:01:53.026 SO libspdk_keyring_linux.so.1.0 00:01:53.026 SO libspdk_keyring_file.so.2.0 00:01:53.026 SO libspdk_accel_iaa.so.3.0 00:01:53.026 SO libspdk_scheduler_dynamic.so.4.0 00:01:53.026 SYMLINK libspdk_scheduler_gscheduler.so 00:01:53.026 SO libspdk_accel_ioat.so.6.0 00:01:53.026 LIB libspdk_blob_bdev.a 00:01:53.026 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:53.026 SYMLINK libspdk_accel_error.so 00:01:53.026 SYMLINK libspdk_keyring_linux.so 00:01:53.026 LIB libspdk_accel_dsa.a 00:01:53.026 SYMLINK libspdk_scheduler_dynamic.so 00:01:53.026 SYMLINK 
libspdk_keyring_file.so 00:01:53.026 SO libspdk_blob_bdev.so.11.0 00:01:53.026 SYMLINK libspdk_accel_iaa.so 00:01:53.026 SYMLINK libspdk_accel_ioat.so 00:01:53.026 SO libspdk_accel_dsa.so.5.0 00:01:53.026 SYMLINK libspdk_blob_bdev.so 00:01:53.026 LIB libspdk_vfu_device.a 00:01:53.026 SYMLINK libspdk_accel_dsa.so 00:01:53.287 SO libspdk_vfu_device.so.3.0 00:01:53.287 SYMLINK libspdk_vfu_device.so 00:01:53.287 LIB libspdk_fsdev_aio.a 00:01:53.287 SO libspdk_fsdev_aio.so.1.0 00:01:53.287 LIB libspdk_sock_posix.a 00:01:53.548 SO libspdk_sock_posix.so.6.0 00:01:53.548 SYMLINK libspdk_fsdev_aio.so 00:01:53.548 SYMLINK libspdk_sock_posix.so 00:01:53.548 CC module/bdev/delay/vbdev_delay.o 00:01:53.548 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:53.548 CC module/bdev/error/vbdev_error.o 00:01:53.548 CC module/bdev/error/vbdev_error_rpc.o 00:01:53.548 CC module/bdev/malloc/bdev_malloc.o 00:01:53.548 CC module/bdev/raid/bdev_raid_rpc.o 00:01:53.548 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:53.548 CC module/bdev/raid/bdev_raid.o 00:01:53.548 CC module/bdev/gpt/gpt.o 00:01:53.548 CC module/bdev/raid/bdev_raid_sb.o 00:01:53.548 CC module/bdev/raid/raid0.o 00:01:53.548 CC module/bdev/gpt/vbdev_gpt.o 00:01:53.548 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:53.548 CC module/blobfs/bdev/blobfs_bdev.o 00:01:53.548 CC module/bdev/raid/raid1.o 00:01:53.548 CC module/bdev/raid/concat.o 00:01:53.548 CC module/bdev/iscsi/bdev_iscsi.o 00:01:53.548 CC module/bdev/aio/bdev_aio.o 00:01:53.548 CC module/bdev/passthru/vbdev_passthru.o 00:01:53.548 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:53.548 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:53.548 CC module/bdev/aio/bdev_aio_rpc.o 00:01:53.548 CC module/bdev/lvol/vbdev_lvol.o 00:01:53.548 CC module/bdev/split/vbdev_split.o 00:01:53.548 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:53.548 CC module/bdev/split/vbdev_split_rpc.o 00:01:53.548 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:53.548 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:53.548 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:53.548 CC module/bdev/null/bdev_null_rpc.o 00:01:53.548 CC module/bdev/ftl/bdev_ftl.o 00:01:53.548 CC module/bdev/null/bdev_null.o 00:01:53.548 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:53.548 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:53.548 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:53.548 CC module/bdev/nvme/bdev_nvme.o 00:01:53.548 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:53.548 CC module/bdev/nvme/nvme_rpc.o 00:01:53.548 CC module/bdev/nvme/bdev_mdns_client.o 00:01:53.548 CC module/bdev/nvme/vbdev_opal.o 00:01:53.548 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:53.548 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:53.809 LIB libspdk_blobfs_bdev.a 00:01:53.809 LIB libspdk_bdev_split.a 00:01:53.809 SO libspdk_blobfs_bdev.so.6.0 00:01:54.071 LIB libspdk_bdev_ftl.a 00:01:54.071 LIB libspdk_bdev_error.a 00:01:54.071 SO libspdk_bdev_split.so.6.0 00:01:54.071 SO libspdk_bdev_error.so.6.0 00:01:54.071 SO libspdk_bdev_ftl.so.6.0 00:01:54.071 LIB libspdk_bdev_gpt.a 00:01:54.071 LIB libspdk_bdev_passthru.a 00:01:54.071 LIB libspdk_bdev_null.a 00:01:54.071 SYMLINK libspdk_blobfs_bdev.so 00:01:54.071 SO libspdk_bdev_null.so.6.0 00:01:54.071 LIB libspdk_bdev_malloc.a 00:01:54.071 SO libspdk_bdev_gpt.so.6.0 00:01:54.071 SO libspdk_bdev_passthru.so.6.0 00:01:54.071 LIB libspdk_bdev_zone_block.a 00:01:54.071 SYMLINK libspdk_bdev_split.so 00:01:54.071 SYMLINK libspdk_bdev_error.so 00:01:54.071 SYMLINK libspdk_bdev_ftl.so 00:01:54.071 LIB libspdk_bdev_aio.a 00:01:54.071 LIB libspdk_bdev_delay.a 00:01:54.071 SO libspdk_bdev_malloc.so.6.0 00:01:54.071 LIB libspdk_bdev_iscsi.a 00:01:54.071 SYMLINK libspdk_bdev_null.so 00:01:54.071 SYMLINK libspdk_bdev_passthru.so 00:01:54.071 SO libspdk_bdev_zone_block.so.6.0 00:01:54.071 SO libspdk_bdev_aio.so.6.0 00:01:54.071 SO libspdk_bdev_delay.so.6.0 00:01:54.071 SO libspdk_bdev_iscsi.so.6.0 00:01:54.071 SYMLINK libspdk_bdev_gpt.so 
00:01:54.071 SYMLINK libspdk_bdev_malloc.so 00:01:54.071 SYMLINK libspdk_bdev_aio.so 00:01:54.071 SYMLINK libspdk_bdev_zone_block.so 00:01:54.071 SYMLINK libspdk_bdev_delay.so 00:01:54.071 SYMLINK libspdk_bdev_iscsi.so 00:01:54.071 LIB libspdk_bdev_lvol.a 00:01:54.332 LIB libspdk_bdev_virtio.a 00:01:54.332 SO libspdk_bdev_lvol.so.6.0 00:01:54.332 SO libspdk_bdev_virtio.so.6.0 00:01:54.332 SYMLINK libspdk_bdev_lvol.so 00:01:54.332 SYMLINK libspdk_bdev_virtio.so 00:01:54.593 LIB libspdk_bdev_raid.a 00:01:54.593 SO libspdk_bdev_raid.so.6.0 00:01:54.856 SYMLINK libspdk_bdev_raid.so 00:01:55.800 LIB libspdk_bdev_nvme.a 00:01:55.800 SO libspdk_bdev_nvme.so.7.0 00:01:55.800 SYMLINK libspdk_bdev_nvme.so 00:01:56.744 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:56.744 CC module/event/subsystems/sock/sock.o 00:01:56.744 CC module/event/subsystems/keyring/keyring.o 00:01:56.744 CC module/event/subsystems/iobuf/iobuf.o 00:01:56.744 CC module/event/subsystems/vmd/vmd.o 00:01:56.744 CC module/event/subsystems/scheduler/scheduler.o 00:01:56.744 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:56.744 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:56.744 CC module/event/subsystems/fsdev/fsdev.o 00:01:56.744 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:56.744 LIB libspdk_event_keyring.a 00:01:56.744 LIB libspdk_event_vhost_blk.a 00:01:56.744 LIB libspdk_event_fsdev.a 00:01:56.744 LIB libspdk_event_vfu_tgt.a 00:01:56.744 LIB libspdk_event_scheduler.a 00:01:56.744 LIB libspdk_event_sock.a 00:01:56.744 LIB libspdk_event_vmd.a 00:01:56.744 SO libspdk_event_keyring.so.1.0 00:01:56.744 SO libspdk_event_vhost_blk.so.3.0 00:01:56.744 LIB libspdk_event_iobuf.a 00:01:56.744 SO libspdk_event_fsdev.so.1.0 00:01:56.744 SO libspdk_event_vmd.so.6.0 00:01:56.744 SO libspdk_event_vfu_tgt.so.3.0 00:01:56.744 SO libspdk_event_scheduler.so.4.0 00:01:56.744 SO libspdk_event_sock.so.5.0 00:01:56.744 SO libspdk_event_iobuf.so.3.0 00:01:56.744 SYMLINK libspdk_event_keyring.so 00:01:56.744 
SYMLINK libspdk_event_vhost_blk.so 00:01:56.744 SYMLINK libspdk_event_fsdev.so 00:01:56.745 SYMLINK libspdk_event_vmd.so 00:01:56.745 SYMLINK libspdk_event_vfu_tgt.so 00:01:57.006 SYMLINK libspdk_event_sock.so 00:01:57.006 SYMLINK libspdk_event_scheduler.so 00:01:57.006 SYMLINK libspdk_event_iobuf.so 00:01:57.266 CC module/event/subsystems/accel/accel.o 00:01:57.526 LIB libspdk_event_accel.a 00:01:57.526 SO libspdk_event_accel.so.6.0 00:01:57.526 SYMLINK libspdk_event_accel.so 00:01:57.786 CC module/event/subsystems/bdev/bdev.o 00:01:58.047 LIB libspdk_event_bdev.a 00:01:58.047 SO libspdk_event_bdev.so.6.0 00:01:58.047 SYMLINK libspdk_event_bdev.so 00:01:58.620 CC module/event/subsystems/scsi/scsi.o 00:01:58.620 CC module/event/subsystems/nbd/nbd.o 00:01:58.620 CC module/event/subsystems/ublk/ublk.o 00:01:58.620 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:58.620 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:58.620 LIB libspdk_event_nbd.a 00:01:58.620 LIB libspdk_event_ublk.a 00:01:58.620 LIB libspdk_event_scsi.a 00:01:58.620 SO libspdk_event_nbd.so.6.0 00:01:58.880 SO libspdk_event_ublk.so.3.0 00:01:58.880 SO libspdk_event_scsi.so.6.0 00:01:58.880 LIB libspdk_event_nvmf.a 00:01:58.880 SYMLINK libspdk_event_ublk.so 00:01:58.880 SYMLINK libspdk_event_nbd.so 00:01:58.880 SYMLINK libspdk_event_scsi.so 00:01:58.880 SO libspdk_event_nvmf.so.6.0 00:01:58.880 SYMLINK libspdk_event_nvmf.so 00:01:59.140 CC module/event/subsystems/iscsi/iscsi.o 00:01:59.140 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:59.399 LIB libspdk_event_vhost_scsi.a 00:01:59.399 LIB libspdk_event_iscsi.a 00:01:59.399 SO libspdk_event_vhost_scsi.so.3.0 00:01:59.399 SO libspdk_event_iscsi.so.6.0 00:01:59.399 SYMLINK libspdk_event_vhost_scsi.so 00:01:59.399 SYMLINK libspdk_event_iscsi.so 00:01:59.660 SO libspdk.so.6.0 00:01:59.660 SYMLINK libspdk.so 00:01:59.921 CC app/spdk_nvme_perf/perf.o 00:02:00.183 CC app/trace_record/trace_record.o 00:02:00.183 CC 
app/spdk_nvme_discover/discovery_aer.o 00:02:00.183 CXX app/trace/trace.o 00:02:00.183 CC app/spdk_lspci/spdk_lspci.o 00:02:00.183 CC test/rpc_client/rpc_client_test.o 00:02:00.183 CC app/spdk_nvme_identify/identify.o 00:02:00.183 TEST_HEADER include/spdk/accel.h 00:02:00.183 TEST_HEADER include/spdk/accel_module.h 00:02:00.183 CC app/spdk_top/spdk_top.o 00:02:00.183 TEST_HEADER include/spdk/assert.h 00:02:00.183 TEST_HEADER include/spdk/barrier.h 00:02:00.183 TEST_HEADER include/spdk/base64.h 00:02:00.183 TEST_HEADER include/spdk/bit_array.h 00:02:00.183 TEST_HEADER include/spdk/bdev_zone.h 00:02:00.183 TEST_HEADER include/spdk/bdev_module.h 00:02:00.183 TEST_HEADER include/spdk/bdev.h 00:02:00.183 TEST_HEADER include/spdk/bit_pool.h 00:02:00.183 TEST_HEADER include/spdk/blob_bdev.h 00:02:00.183 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:00.183 TEST_HEADER include/spdk/blobfs.h 00:02:00.183 TEST_HEADER include/spdk/blob.h 00:02:00.183 TEST_HEADER include/spdk/conf.h 00:02:00.183 TEST_HEADER include/spdk/config.h 00:02:00.183 TEST_HEADER include/spdk/cpuset.h 00:02:00.183 TEST_HEADER include/spdk/crc16.h 00:02:00.183 TEST_HEADER include/spdk/crc32.h 00:02:00.183 TEST_HEADER include/spdk/crc64.h 00:02:00.183 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:00.183 TEST_HEADER include/spdk/dma.h 00:02:00.183 TEST_HEADER include/spdk/dif.h 00:02:00.183 TEST_HEADER include/spdk/endian.h 00:02:00.183 CC app/iscsi_tgt/iscsi_tgt.o 00:02:00.183 TEST_HEADER include/spdk/env_dpdk.h 00:02:00.183 TEST_HEADER include/spdk/env.h 00:02:00.183 TEST_HEADER include/spdk/event.h 00:02:00.183 TEST_HEADER include/spdk/fd_group.h 00:02:00.183 TEST_HEADER include/spdk/fd.h 00:02:00.183 TEST_HEADER include/spdk/file.h 00:02:00.183 TEST_HEADER include/spdk/fsdev.h 00:02:00.183 TEST_HEADER include/spdk/fsdev_module.h 00:02:00.183 TEST_HEADER include/spdk/ftl.h 00:02:00.183 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:00.183 TEST_HEADER include/spdk/hexlify.h 00:02:00.183 TEST_HEADER 
include/spdk/gpt_spec.h 00:02:00.183 CC app/spdk_dd/spdk_dd.o 00:02:00.183 CC app/nvmf_tgt/nvmf_main.o 00:02:00.183 TEST_HEADER include/spdk/idxd.h 00:02:00.183 TEST_HEADER include/spdk/histogram_data.h 00:02:00.183 TEST_HEADER include/spdk/idxd_spec.h 00:02:00.183 TEST_HEADER include/spdk/init.h 00:02:00.183 TEST_HEADER include/spdk/ioat.h 00:02:00.183 TEST_HEADER include/spdk/iscsi_spec.h 00:02:00.183 TEST_HEADER include/spdk/ioat_spec.h 00:02:00.183 TEST_HEADER include/spdk/json.h 00:02:00.183 TEST_HEADER include/spdk/jsonrpc.h 00:02:00.183 TEST_HEADER include/spdk/keyring.h 00:02:00.183 TEST_HEADER include/spdk/keyring_module.h 00:02:00.183 TEST_HEADER include/spdk/log.h 00:02:00.183 TEST_HEADER include/spdk/likely.h 00:02:00.183 TEST_HEADER include/spdk/memory.h 00:02:00.183 TEST_HEADER include/spdk/lvol.h 00:02:00.183 TEST_HEADER include/spdk/md5.h 00:02:00.183 TEST_HEADER include/spdk/mmio.h 00:02:00.183 CC app/spdk_tgt/spdk_tgt.o 00:02:00.183 TEST_HEADER include/spdk/net.h 00:02:00.183 TEST_HEADER include/spdk/nbd.h 00:02:00.183 TEST_HEADER include/spdk/notify.h 00:02:00.183 TEST_HEADER include/spdk/nvme.h 00:02:00.183 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:00.183 TEST_HEADER include/spdk/nvme_intel.h 00:02:00.183 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:00.183 TEST_HEADER include/spdk/nvme_zns.h 00:02:00.183 TEST_HEADER include/spdk/nvme_spec.h 00:02:00.183 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:00.183 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:00.183 TEST_HEADER include/spdk/nvmf_spec.h 00:02:00.183 TEST_HEADER include/spdk/nvmf.h 00:02:00.184 TEST_HEADER include/spdk/nvmf_transport.h 00:02:00.184 TEST_HEADER include/spdk/pci_ids.h 00:02:00.184 TEST_HEADER include/spdk/opal.h 00:02:00.184 TEST_HEADER include/spdk/opal_spec.h 00:02:00.184 TEST_HEADER include/spdk/pipe.h 00:02:00.184 TEST_HEADER include/spdk/queue.h 00:02:00.184 TEST_HEADER include/spdk/reduce.h 00:02:00.184 TEST_HEADER include/spdk/rpc.h 00:02:00.184 TEST_HEADER 
include/spdk/scsi.h 00:02:00.184 TEST_HEADER include/spdk/scheduler.h 00:02:00.184 TEST_HEADER include/spdk/scsi_spec.h 00:02:00.184 TEST_HEADER include/spdk/sock.h 00:02:00.184 TEST_HEADER include/spdk/stdinc.h 00:02:00.184 TEST_HEADER include/spdk/string.h 00:02:00.184 TEST_HEADER include/spdk/thread.h 00:02:00.184 TEST_HEADER include/spdk/trace.h 00:02:00.184 TEST_HEADER include/spdk/trace_parser.h 00:02:00.184 TEST_HEADER include/spdk/tree.h 00:02:00.184 TEST_HEADER include/spdk/ublk.h 00:02:00.184 TEST_HEADER include/spdk/util.h 00:02:00.184 TEST_HEADER include/spdk/uuid.h 00:02:00.184 TEST_HEADER include/spdk/version.h 00:02:00.184 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:00.184 TEST_HEADER include/spdk/vhost.h 00:02:00.184 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:00.184 TEST_HEADER include/spdk/vmd.h 00:02:00.184 TEST_HEADER include/spdk/zipf.h 00:02:00.184 TEST_HEADER include/spdk/xor.h 00:02:00.184 CXX test/cpp_headers/accel.o 00:02:00.184 CXX test/cpp_headers/accel_module.o 00:02:00.184 CXX test/cpp_headers/assert.o 00:02:00.184 CXX test/cpp_headers/barrier.o 00:02:00.184 CXX test/cpp_headers/base64.o 00:02:00.184 CXX test/cpp_headers/bdev.o 00:02:00.184 CXX test/cpp_headers/bdev_zone.o 00:02:00.184 CXX test/cpp_headers/bdev_module.o 00:02:00.184 CXX test/cpp_headers/bit_pool.o 00:02:00.184 CXX test/cpp_headers/bit_array.o 00:02:00.184 CXX test/cpp_headers/blob_bdev.o 00:02:00.184 CXX test/cpp_headers/blobfs.o 00:02:00.184 CXX test/cpp_headers/blobfs_bdev.o 00:02:00.184 CXX test/cpp_headers/blob.o 00:02:00.184 CXX test/cpp_headers/config.o 00:02:00.184 CXX test/cpp_headers/crc16.o 00:02:00.184 CXX test/cpp_headers/conf.o 00:02:00.184 CXX test/cpp_headers/cpuset.o 00:02:00.184 CXX test/cpp_headers/crc32.o 00:02:00.184 CXX test/cpp_headers/crc64.o 00:02:00.184 CXX test/cpp_headers/dif.o 00:02:00.184 CXX test/cpp_headers/dma.o 00:02:00.184 CXX test/cpp_headers/endian.o 00:02:00.184 CXX test/cpp_headers/env_dpdk.o 00:02:00.184 CXX 
test/cpp_headers/event.o 00:02:00.184 CXX test/cpp_headers/env.o 00:02:00.184 CXX test/cpp_headers/fd.o 00:02:00.184 CXX test/cpp_headers/fd_group.o 00:02:00.184 CXX test/cpp_headers/file.o 00:02:00.184 CXX test/cpp_headers/fsdev.o 00:02:00.184 CXX test/cpp_headers/fsdev_module.o 00:02:00.184 CXX test/cpp_headers/fuse_dispatcher.o 00:02:00.184 CXX test/cpp_headers/ftl.o 00:02:00.184 CXX test/cpp_headers/hexlify.o 00:02:00.184 CXX test/cpp_headers/gpt_spec.o 00:02:00.184 CXX test/cpp_headers/histogram_data.o 00:02:00.184 CXX test/cpp_headers/idxd.o 00:02:00.184 CXX test/cpp_headers/ioat.o 00:02:00.184 CXX test/cpp_headers/init.o 00:02:00.184 CXX test/cpp_headers/idxd_spec.o 00:02:00.184 CXX test/cpp_headers/ioat_spec.o 00:02:00.184 CXX test/cpp_headers/jsonrpc.o 00:02:00.184 CXX test/cpp_headers/json.o 00:02:00.184 CXX test/cpp_headers/iscsi_spec.o 00:02:00.184 CXX test/cpp_headers/keyring.o 00:02:00.184 CXX test/cpp_headers/log.o 00:02:00.184 CXX test/cpp_headers/lvol.o 00:02:00.184 CXX test/cpp_headers/keyring_module.o 00:02:00.184 CXX test/cpp_headers/likely.o 00:02:00.184 CC examples/util/zipf/zipf.o 00:02:00.184 CXX test/cpp_headers/md5.o 00:02:00.184 CXX test/cpp_headers/nbd.o 00:02:00.184 CXX test/cpp_headers/memory.o 00:02:00.184 CXX test/cpp_headers/net.o 00:02:00.184 CXX test/cpp_headers/mmio.o 00:02:00.184 CXX test/cpp_headers/notify.o 00:02:00.184 CXX test/cpp_headers/nvme.o 00:02:00.449 CXX test/cpp_headers/nvme_spec.o 00:02:00.449 CXX test/cpp_headers/nvme_ocssd.o 00:02:00.449 CXX test/cpp_headers/nvme_intel.o 00:02:00.449 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:00.449 CC test/env/memory/memory_ut.o 00:02:00.449 CXX test/cpp_headers/nvme_zns.o 00:02:00.449 LINK spdk_lspci 00:02:00.449 CXX test/cpp_headers/nvmf_cmd.o 00:02:00.449 CC test/env/vtophys/vtophys.o 00:02:00.450 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:00.450 CXX test/cpp_headers/nvmf_spec.o 00:02:00.450 CXX test/cpp_headers/nvmf_transport.o 00:02:00.450 CC examples/ioat/verify/verify.o 
00:02:00.450 CXX test/cpp_headers/opal_spec.o 00:02:00.450 CXX test/cpp_headers/nvmf.o 00:02:00.450 CXX test/cpp_headers/opal.o 00:02:00.450 CXX test/cpp_headers/pipe.o 00:02:00.450 CXX test/cpp_headers/pci_ids.o 00:02:00.450 CXX test/cpp_headers/reduce.o 00:02:00.450 CXX test/cpp_headers/rpc.o 00:02:00.450 CXX test/cpp_headers/scheduler.o 00:02:00.450 CXX test/cpp_headers/queue.o 00:02:00.450 CXX test/cpp_headers/scsi.o 00:02:00.450 CXX test/cpp_headers/sock.o 00:02:00.450 CXX test/cpp_headers/scsi_spec.o 00:02:00.450 CXX test/cpp_headers/stdinc.o 00:02:00.450 CXX test/cpp_headers/thread.o 00:02:00.450 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:00.450 CXX test/cpp_headers/string.o 00:02:00.450 CXX test/cpp_headers/trace_parser.o 00:02:00.450 CC test/app/jsoncat/jsoncat.o 00:02:00.450 CC test/env/pci/pci_ut.o 00:02:00.450 CXX test/cpp_headers/tree.o 00:02:00.450 CXX test/cpp_headers/ublk.o 00:02:00.450 CC test/thread/poller_perf/poller_perf.o 00:02:00.450 CXX test/cpp_headers/trace.o 00:02:00.450 CXX test/cpp_headers/util.o 00:02:00.450 CXX test/cpp_headers/uuid.o 00:02:00.450 CXX test/cpp_headers/version.o 00:02:00.450 CXX test/cpp_headers/vfio_user_spec.o 00:02:00.450 CC examples/ioat/perf/perf.o 00:02:00.450 CC test/app/stub/stub.o 00:02:00.450 CXX test/cpp_headers/vfio_user_pci.o 00:02:00.450 CXX test/cpp_headers/vhost.o 00:02:00.450 CXX test/cpp_headers/zipf.o 00:02:00.450 CXX test/cpp_headers/xor.o 00:02:00.450 CXX test/cpp_headers/vmd.o 00:02:00.450 CC app/fio/nvme/fio_plugin.o 00:02:00.450 CC test/app/histogram_perf/histogram_perf.o 00:02:00.450 CC app/fio/bdev/fio_plugin.o 00:02:00.450 CC test/dma/test_dma/test_dma.o 00:02:00.450 CC test/app/bdev_svc/bdev_svc.o 00:02:00.450 LINK spdk_nvme_discover 00:02:00.450 LINK interrupt_tgt 00:02:00.450 LINK rpc_client_test 00:02:00.450 LINK nvmf_tgt 00:02:00.450 LINK iscsi_tgt 00:02:00.713 LINK spdk_trace_record 00:02:00.713 LINK spdk_tgt 00:02:00.713 CC test/env/mem_callbacks/mem_callbacks.o 
00:02:00.713 LINK spdk_trace 00:02:00.974 LINK zipf 00:02:00.974 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:00.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:00.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:00.974 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:00.974 LINK spdk_dd 00:02:00.974 LINK vtophys 00:02:00.974 LINK env_dpdk_post_init 00:02:00.974 LINK jsoncat 00:02:00.974 LINK ioat_perf 00:02:01.233 LINK bdev_svc 00:02:01.233 LINK poller_perf 00:02:01.233 LINK histogram_perf 00:02:01.233 LINK verify 00:02:01.233 LINK stub 00:02:01.233 LINK spdk_nvme_perf 00:02:01.233 CC examples/sock/hello_world/hello_sock.o 00:02:01.233 CC app/vhost/vhost.o 00:02:01.233 CC examples/idxd/perf/perf.o 00:02:01.233 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.494 CC examples/vmd/led/led.o 00:02:01.494 LINK pci_ut 00:02:01.494 CC examples/thread/thread/thread_ex.o 00:02:01.494 LINK test_dma 00:02:01.494 LINK spdk_bdev 00:02:01.494 LINK nvme_fuzz 00:02:01.494 LINK spdk_nvme_identify 00:02:01.494 LINK spdk_nvme 00:02:01.494 LINK led 00:02:01.494 LINK vhost_fuzz 00:02:01.494 LINK lsvmd 00:02:01.494 LINK spdk_top 00:02:01.494 LINK vhost 00:02:01.494 LINK hello_sock 00:02:01.494 CC test/event/event_perf/event_perf.o 00:02:01.494 CC test/event/app_repeat/app_repeat.o 00:02:01.494 LINK mem_callbacks 00:02:01.494 CC test/event/reactor/reactor.o 00:02:01.494 CC test/event/reactor_perf/reactor_perf.o 00:02:01.754 CC test/event/scheduler/scheduler.o 00:02:01.754 LINK idxd_perf 00:02:01.754 LINK thread 00:02:01.754 LINK reactor_perf 00:02:01.754 LINK event_perf 00:02:01.754 LINK app_repeat 00:02:01.754 LINK reactor 00:02:02.015 LINK scheduler 00:02:02.015 CC test/nvme/overhead/overhead.o 00:02:02.015 CC test/nvme/sgl/sgl.o 00:02:02.015 CC test/nvme/reserve/reserve.o 00:02:02.015 CC test/nvme/cuse/cuse.o 00:02:02.015 CC test/nvme/e2edp/nvme_dp.o 00:02:02.015 CC test/nvme/err_injection/err_injection.o 00:02:02.015 CC test/nvme/reset/reset.o 00:02:02.015 CC 
test/nvme/compliance/nvme_compliance.o 00:02:02.015 CC test/nvme/startup/startup.o 00:02:02.015 CC test/nvme/aer/aer.o 00:02:02.015 CC test/nvme/fused_ordering/fused_ordering.o 00:02:02.015 CC test/nvme/connect_stress/connect_stress.o 00:02:02.015 CC test/nvme/simple_copy/simple_copy.o 00:02:02.015 CC test/nvme/boot_partition/boot_partition.o 00:02:02.015 CC test/nvme/fdp/fdp.o 00:02:02.015 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:02.015 CC test/blobfs/mkfs/mkfs.o 00:02:02.015 CC test/accel/dif/dif.o 00:02:02.015 LINK memory_ut 00:02:02.015 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:02.015 CC examples/nvme/arbitration/arbitration.o 00:02:02.015 CC examples/nvme/hotplug/hotplug.o 00:02:02.015 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:02.015 CC examples/nvme/hello_world/hello_world.o 00:02:02.015 CC examples/nvme/reconnect/reconnect.o 00:02:02.015 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:02.015 CC examples/nvme/abort/abort.o 00:02:02.276 CC test/lvol/esnap/esnap.o 00:02:02.276 CC examples/accel/perf/accel_perf.o 00:02:02.276 LINK err_injection 00:02:02.276 LINK doorbell_aers 00:02:02.276 LINK startup 00:02:02.276 LINK boot_partition 00:02:02.276 LINK connect_stress 00:02:02.276 LINK reserve 00:02:02.276 LINK sgl 00:02:02.276 LINK overhead 00:02:02.276 CC examples/blob/hello_world/hello_blob.o 00:02:02.276 LINK fused_ordering 00:02:02.276 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:02.276 LINK mkfs 00:02:02.276 LINK pmr_persistence 00:02:02.276 CC examples/blob/cli/blobcli.o 00:02:02.276 LINK cmb_copy 00:02:02.276 LINK nvme_dp 00:02:02.276 LINK simple_copy 00:02:02.276 LINK aer 00:02:02.276 LINK reset 00:02:02.276 LINK hello_world 00:02:02.276 LINK fdp 00:02:02.276 LINK nvme_compliance 00:02:02.276 LINK hotplug 00:02:02.537 LINK arbitration 00:02:02.537 LINK reconnect 00:02:02.537 LINK abort 00:02:02.537 LINK iscsi_fuzz 00:02:02.537 LINK hello_blob 00:02:02.537 LINK nvme_manage 00:02:02.537 LINK hello_fsdev 00:02:02.537 LINK dif 
00:02:02.798 LINK accel_perf 00:02:02.798 LINK blobcli 00:02:03.058 LINK cuse 00:02:03.318 CC test/bdev/bdevio/bdevio.o 00:02:03.318 CC examples/bdev/hello_world/hello_bdev.o 00:02:03.318 CC examples/bdev/bdevperf/bdevperf.o 00:02:03.578 LINK hello_bdev 00:02:03.578 LINK bdevio 00:02:04.149 LINK bdevperf 00:02:04.720 CC examples/nvmf/nvmf/nvmf.o 00:02:04.981 LINK nvmf 00:02:06.366 LINK esnap 00:02:06.943 00:02:06.943 real 0m54.600s 00:02:06.943 user 7m47.844s 00:02:06.943 sys 4m54.575s 00:02:06.943 14:59:16 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:06.943 14:59:16 make -- common/autotest_common.sh@10 -- $ set +x 00:02:06.943 ************************************ 00:02:06.943 END TEST make 00:02:06.943 ************************************ 00:02:06.943 14:59:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:06.943 14:59:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:06.943 14:59:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:06.943 14:59:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.943 14:59:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:06.943 14:59:16 -- pm/common@44 -- $ pid=3636589 00:02:06.943 14:59:16 -- pm/common@50 -- $ kill -TERM 3636589 00:02:06.943 14:59:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.943 14:59:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:06.943 14:59:16 -- pm/common@44 -- $ pid=3636590 00:02:06.943 14:59:16 -- pm/common@50 -- $ kill -TERM 3636590 00:02:06.943 14:59:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.943 14:59:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:06.943 14:59:16 -- pm/common@44 -- $ pid=3636593 00:02:06.943 14:59:16 -- pm/common@50 -- $ kill 
-TERM 3636593 00:02:06.943 14:59:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.943 14:59:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:06.943 14:59:16 -- pm/common@44 -- $ pid=3636617 00:02:06.943 14:59:16 -- pm/common@50 -- $ sudo -E kill -TERM 3636617 00:02:06.943 14:59:16 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:06.943 14:59:16 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:06.943 14:59:16 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:06.943 14:59:16 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:06.943 14:59:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:06.943 14:59:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:06.943 14:59:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:06.943 14:59:16 -- scripts/common.sh@336 -- # IFS=.-: 00:02:06.943 14:59:16 -- scripts/common.sh@336 -- # read -ra ver1 00:02:06.943 14:59:16 -- scripts/common.sh@337 -- # IFS=.-: 00:02:06.943 14:59:16 -- scripts/common.sh@337 -- # read -ra ver2 00:02:06.943 14:59:16 -- scripts/common.sh@338 -- # local 'op=<' 00:02:06.943 14:59:16 -- scripts/common.sh@340 -- # ver1_l=2 00:02:06.943 14:59:16 -- scripts/common.sh@341 -- # ver2_l=1 00:02:06.943 14:59:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:06.943 14:59:16 -- scripts/common.sh@344 -- # case "$op" in 00:02:06.943 14:59:16 -- scripts/common.sh@345 -- # : 1 00:02:06.943 14:59:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:06.943 14:59:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:06.943 14:59:16 -- scripts/common.sh@365 -- # decimal 1 00:02:06.943 14:59:16 -- scripts/common.sh@353 -- # local d=1 00:02:06.943 14:59:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:06.943 14:59:16 -- scripts/common.sh@355 -- # echo 1 00:02:06.943 14:59:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:06.943 14:59:16 -- scripts/common.sh@366 -- # decimal 2 00:02:06.943 14:59:16 -- scripts/common.sh@353 -- # local d=2 00:02:06.943 14:59:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:06.943 14:59:16 -- scripts/common.sh@355 -- # echo 2 00:02:06.943 14:59:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:06.943 14:59:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:06.943 14:59:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:06.943 14:59:16 -- scripts/common.sh@368 -- # return 0 00:02:06.943 14:59:16 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:06.943 14:59:16 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:06.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:06.943 --rc genhtml_branch_coverage=1 00:02:06.943 --rc genhtml_function_coverage=1 00:02:06.943 --rc genhtml_legend=1 00:02:06.943 --rc geninfo_all_blocks=1 00:02:06.943 --rc geninfo_unexecuted_blocks=1 00:02:06.943 00:02:06.943 ' 00:02:06.943 14:59:16 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:06.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:06.943 --rc genhtml_branch_coverage=1 00:02:06.943 --rc genhtml_function_coverage=1 00:02:06.943 --rc genhtml_legend=1 00:02:06.943 --rc geninfo_all_blocks=1 00:02:06.943 --rc geninfo_unexecuted_blocks=1 00:02:06.943 00:02:06.943 ' 00:02:06.943 14:59:16 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:06.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:06.943 --rc genhtml_branch_coverage=1 00:02:06.943 --rc 
genhtml_function_coverage=1 00:02:06.943 --rc genhtml_legend=1 00:02:06.943 --rc geninfo_all_blocks=1 00:02:06.943 --rc geninfo_unexecuted_blocks=1 00:02:06.943 00:02:06.943 ' 00:02:06.943 14:59:16 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:06.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:06.943 --rc genhtml_branch_coverage=1 00:02:06.943 --rc genhtml_function_coverage=1 00:02:06.943 --rc genhtml_legend=1 00:02:06.943 --rc geninfo_all_blocks=1 00:02:06.943 --rc geninfo_unexecuted_blocks=1 00:02:06.943 00:02:06.943 ' 00:02:07.346 14:59:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:07.346 14:59:16 -- nvmf/common.sh@7 -- # uname -s 00:02:07.346 14:59:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:07.346 14:59:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:07.346 14:59:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:07.346 14:59:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:07.346 14:59:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:07.346 14:59:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:07.346 14:59:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:07.346 14:59:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:07.346 14:59:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:07.346 14:59:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:07.346 14:59:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:02:07.346 14:59:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:02:07.346 14:59:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:07.346 14:59:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:07.346 14:59:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:07.346 14:59:16 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:07.346 14:59:16 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:07.346 14:59:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:07.346 14:59:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:07.346 14:59:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.346 14:59:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.346 14:59:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.346 14:59:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.346 14:59:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.346 14:59:16 -- paths/export.sh@5 -- # export PATH 00:02:07.346 14:59:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.346 14:59:16 -- nvmf/common.sh@51 -- # : 0 00:02:07.346 14:59:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:07.346 14:59:16 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:07.346 14:59:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:07.346 14:59:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:07.346 14:59:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:07.346 14:59:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:07.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:07.346 14:59:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:07.346 14:59:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:07.346 14:59:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:07.346 14:59:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:07.346 14:59:16 -- spdk/autotest.sh@32 -- # uname -s 00:02:07.346 14:59:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:07.346 14:59:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:07.346 14:59:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:07.346 14:59:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:07.346 14:59:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:07.346 14:59:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:07.346 14:59:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:07.346 14:59:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:07.346 14:59:16 -- spdk/autotest.sh@48 -- # udevadm_pid=3701782 00:02:07.346 14:59:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:07.346 14:59:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:07.346 14:59:16 -- pm/common@17 -- # local monitor 00:02:07.346 14:59:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.346 14:59:16 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:07.346 14:59:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.346 14:59:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.346 14:59:16 -- pm/common@21 -- # date +%s 00:02:07.346 14:59:16 -- pm/common@25 -- # sleep 1 00:02:07.346 14:59:16 -- pm/common@21 -- # date +%s 00:02:07.346 14:59:16 -- pm/common@21 -- # date +%s 00:02:07.346 14:59:16 -- pm/common@21 -- # date +%s 00:02:07.346 14:59:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727787556 00:02:07.346 14:59:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727787556 00:02:07.346 14:59:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727787556 00:02:07.346 14:59:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727787556 00:02:07.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727787556_collect-cpu-load.pm.log 00:02:07.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727787556_collect-vmstat.pm.log 00:02:07.346 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727787556_collect-cpu-temp.pm.log 00:02:07.347 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727787556_collect-bmc-pm.bmc.pm.log 00:02:08.340 
14:59:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:08.340 14:59:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:08.340 14:59:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:08.340 14:59:17 -- common/autotest_common.sh@10 -- # set +x 00:02:08.340 14:59:17 -- spdk/autotest.sh@59 -- # create_test_list 00:02:08.340 14:59:17 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:08.340 14:59:17 -- common/autotest_common.sh@10 -- # set +x 00:02:08.340 14:59:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:08.340 14:59:17 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.340 14:59:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.340 14:59:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:08.340 14:59:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.340 14:59:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:08.340 14:59:17 -- common/autotest_common.sh@1455 -- # uname 00:02:08.340 14:59:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:08.340 14:59:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:08.340 14:59:17 -- common/autotest_common.sh@1475 -- # uname 00:02:08.340 14:59:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:08.340 14:59:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:08.340 14:59:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:08.340 lcov: LCOV version 1.15 00:02:08.340 14:59:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:18.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:18.340 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:36.457 14:59:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:36.457 14:59:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:36.458 14:59:43 -- common/autotest_common.sh@10 -- # set +x 00:02:36.458 14:59:43 -- spdk/autotest.sh@78 -- # rm -f 00:02:36.458 14:59:43 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:37.398 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:37.398 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:37.398 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:37.398 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:37.398 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:37.398 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:37.398 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:37.398 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:37.658 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:37.658 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:37.658 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:37.658 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:37.658 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:37.658 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:37.658 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:37.658 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:37.658 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:37.918 14:59:47 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:37.918 14:59:47 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:37.918 14:59:47 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:37.918 14:59:47 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:37.918 14:59:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:37.918 14:59:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:37.918 14:59:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:37.918 14:59:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:37.918 14:59:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:37.918 14:59:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:37.918 14:59:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:37.918 14:59:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:37.918 14:59:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:37.918 14:59:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:37.918 14:59:47 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:38.177 No valid GPT data, bailing 00:02:38.177 14:59:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:38.177 14:59:47 -- scripts/common.sh@394 -- # pt= 00:02:38.177 14:59:47 -- scripts/common.sh@395 -- # return 1 00:02:38.177 14:59:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:38.177 1+0 records in 00:02:38.177 1+0 records out 00:02:38.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447171 s, 234 MB/s 00:02:38.177 14:59:47 -- spdk/autotest.sh@105 -- # sync 00:02:38.177 14:59:47 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:38.177 14:59:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:38.177 14:59:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:46.310 14:59:56 -- spdk/autotest.sh@111 -- # uname -s 00:02:46.310 14:59:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:46.310 14:59:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:46.310 14:59:56 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:49.607 Hugepages 00:02:49.607 node hugesize free / total 00:02:49.607 node0 1048576kB 0 / 0 00:02:49.607 node0 2048kB 0 / 0 00:02:49.607 node1 1048576kB 0 / 0 00:02:49.868 node1 2048kB 0 / 0 00:02:49.868 00:02:49.868 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:49.868 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:49.868 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:49.868 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:49.868 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:49.868 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:49.868 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:49.868 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:49.868 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:49.868 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:49.868 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:49.868 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:49.868 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:49.868 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:49.868 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:49.868 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:49.868 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:49.868 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:49.868 14:59:59 -- spdk/autotest.sh@117 -- # uname -s 00:02:49.868 14:59:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:49.868 14:59:59 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:02:49.868 14:59:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:54.072 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:54.072 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:55.457 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:02:55.717 15:00:05 -- common/autotest_common.sh@1515 -- # sleep 1 00:02:56.658 15:00:06 -- common/autotest_common.sh@1516 -- # bdfs=() 00:02:56.658 15:00:06 -- common/autotest_common.sh@1516 -- # local bdfs 00:02:56.658 15:00:06 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:02:56.658 15:00:06 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:02:56.658 15:00:06 -- common/autotest_common.sh@1496 -- # bdfs=() 00:02:56.658 15:00:06 -- common/autotest_common.sh@1496 -- # local bdfs 00:02:56.658 15:00:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:56.658 15:00:06 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:56.658 15:00:06 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:02:56.658 15:00:06 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:02:56.658 15:00:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:02:56.658 15:00:06 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.860 Waiting for block devices as requested 00:03:00.860 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:00.860 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:00.860 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:00.860 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:00.860 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:00.860 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:00.860 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:00.860 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:00.860 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:01.120 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:01.120 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:01.120 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:01.120 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:01.381 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:01.381 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:01.381 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:01.642 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:01.903 15:00:11 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:01.903 15:00:11 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:01.903 15:00:11 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:01.903 15:00:11 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:01.903 15:00:11 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:01.903 15:00:11 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:01.903 15:00:11 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:01.903 15:00:11 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:01.903 15:00:11 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:01.903 15:00:11 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:01.903 15:00:11 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:01.903 15:00:11 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:01.903 15:00:11 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:01.903 15:00:11 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:01.903 15:00:11 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:01.903 15:00:11 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:01.903 15:00:11 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:01.903 15:00:11 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:01.903 15:00:11 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:01.903 15:00:11 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:01.903 15:00:11 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:01.903 15:00:11 -- common/autotest_common.sh@1541 -- # continue 00:03:01.903 15:00:11 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:01.903 15:00:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:01.903 15:00:11 -- common/autotest_common.sh@10 -- # set +x 00:03:01.903 15:00:11 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:01.903 15:00:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:01.903 15:00:11 -- common/autotest_common.sh@10 -- # set +x 00:03:01.903 15:00:11 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.202 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:03:05.202 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:05.202 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:05.462 15:00:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:05.462 15:00:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:05.462 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:03:05.462 15:00:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:05.462 15:00:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:05.462 15:00:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:05.462 15:00:15 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:05.462 15:00:15 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:05.462 15:00:15 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:05.462 15:00:15 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:05.462 15:00:15 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:05.462 15:00:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:05.462 15:00:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:05.462 15:00:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:05.462 15:00:15 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:05.462 15:00:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:05.462 15:00:15 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:05.462 15:00:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:05.462 15:00:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:05.462 15:00:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:05.462 15:00:15 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:05.462 15:00:15 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:05.462 15:00:15 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:05.462 15:00:15 -- common/autotest_common.sh@1570 -- # return 0 00:03:05.462 15:00:15 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:05.462 15:00:15 -- common/autotest_common.sh@1578 -- # return 0 00:03:05.462 15:00:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:05.462 15:00:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:05.462 15:00:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:05.462 15:00:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:05.462 15:00:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:05.462 15:00:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:05.462 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:03:05.462 15:00:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:05.462 15:00:15 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:05.462 15:00:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:05.462 15:00:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:05.462 15:00:15 -- common/autotest_common.sh@10 -- # set +x 00:03:05.462 ************************************ 
00:03:05.462 START TEST env 00:03:05.462 ************************************ 00:03:05.462 15:00:15 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:05.723 * Looking for test storage... 00:03:05.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:05.723 15:00:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:05.723 15:00:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:05.723 15:00:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:05.723 15:00:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:05.723 15:00:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:05.723 15:00:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:05.723 15:00:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:05.723 15:00:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:05.723 15:00:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:05.723 15:00:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:05.723 15:00:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:05.723 15:00:15 env -- scripts/common.sh@344 -- # case "$op" in 00:03:05.723 15:00:15 env -- scripts/common.sh@345 -- # : 1 00:03:05.723 15:00:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:05.723 15:00:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:05.723 15:00:15 env -- scripts/common.sh@365 -- # decimal 1 00:03:05.723 15:00:15 env -- scripts/common.sh@353 -- # local d=1 00:03:05.723 15:00:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:05.723 15:00:15 env -- scripts/common.sh@355 -- # echo 1 00:03:05.723 15:00:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:05.723 15:00:15 env -- scripts/common.sh@366 -- # decimal 2 00:03:05.723 15:00:15 env -- scripts/common.sh@353 -- # local d=2 00:03:05.723 15:00:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:05.723 15:00:15 env -- scripts/common.sh@355 -- # echo 2 00:03:05.723 15:00:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:05.723 15:00:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:05.723 15:00:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:05.723 15:00:15 env -- scripts/common.sh@368 -- # return 0 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:05.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.723 --rc genhtml_branch_coverage=1 00:03:05.723 --rc genhtml_function_coverage=1 00:03:05.723 --rc genhtml_legend=1 00:03:05.723 --rc geninfo_all_blocks=1 00:03:05.723 --rc geninfo_unexecuted_blocks=1 00:03:05.723 00:03:05.723 ' 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:05.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.723 --rc genhtml_branch_coverage=1 00:03:05.723 --rc genhtml_function_coverage=1 00:03:05.723 --rc genhtml_legend=1 00:03:05.723 --rc geninfo_all_blocks=1 00:03:05.723 --rc geninfo_unexecuted_blocks=1 00:03:05.723 00:03:05.723 ' 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:05.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:05.723 --rc genhtml_branch_coverage=1 00:03:05.723 --rc genhtml_function_coverage=1 00:03:05.723 --rc genhtml_legend=1 00:03:05.723 --rc geninfo_all_blocks=1 00:03:05.723 --rc geninfo_unexecuted_blocks=1 00:03:05.723 00:03:05.723 ' 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:05.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.723 --rc genhtml_branch_coverage=1 00:03:05.723 --rc genhtml_function_coverage=1 00:03:05.723 --rc genhtml_legend=1 00:03:05.723 --rc geninfo_all_blocks=1 00:03:05.723 --rc geninfo_unexecuted_blocks=1 00:03:05.723 00:03:05.723 ' 00:03:05.723 15:00:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:05.723 15:00:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:05.723 15:00:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:05.723 ************************************ 00:03:05.723 START TEST env_memory 00:03:05.723 ************************************ 00:03:05.723 15:00:15 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:05.723 00:03:05.723 00:03:05.723 CUnit - A unit testing framework for C - Version 2.1-3 00:03:05.723 http://cunit.sourceforge.net/ 00:03:05.723 00:03:05.723 00:03:05.723 Suite: memory 00:03:05.723 Test: alloc and free memory map ...[2024-10-01 15:00:15.572132] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:05.985 passed 00:03:05.985 Test: mem map translation ...[2024-10-01 15:00:15.597781] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:05.985 [2024-10-01 
15:00:15.597811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:05.985 [2024-10-01 15:00:15.597858] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:05.985 [2024-10-01 15:00:15.597865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:05.985 passed 00:03:05.985 Test: mem map registration ...[2024-10-01 15:00:15.653282] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:05.985 [2024-10-01 15:00:15.653308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:05.985 passed 00:03:05.985 Test: mem map adjacent registrations ...passed 00:03:05.985 00:03:05.985 Run Summary: Type Total Ran Passed Failed Inactive 00:03:05.985 suites 1 1 n/a 0 0 00:03:05.985 tests 4 4 4 0 0 00:03:05.985 asserts 152 152 152 0 n/a 00:03:05.985 00:03:05.985 Elapsed time = 0.191 seconds 00:03:05.985 00:03:05.985 real 0m0.207s 00:03:05.985 user 0m0.192s 00:03:05.985 sys 0m0.014s 00:03:05.985 15:00:15 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:05.985 15:00:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:05.985 ************************************ 00:03:05.985 END TEST env_memory 00:03:05.985 ************************************ 00:03:05.985 15:00:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:05.985 15:00:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:03:05.985 15:00:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:05.985 15:00:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:05.985 ************************************ 00:03:05.985 START TEST env_vtophys 00:03:05.985 ************************************ 00:03:05.985 15:00:15 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:05.985 EAL: lib.eal log level changed from notice to debug 00:03:05.985 EAL: Detected lcore 0 as core 0 on socket 0 00:03:05.985 EAL: Detected lcore 1 as core 1 on socket 0 00:03:05.985 EAL: Detected lcore 2 as core 2 on socket 0 00:03:05.985 EAL: Detected lcore 3 as core 3 on socket 0 00:03:05.985 EAL: Detected lcore 4 as core 4 on socket 0 00:03:05.985 EAL: Detected lcore 5 as core 5 on socket 0 00:03:05.985 EAL: Detected lcore 6 as core 6 on socket 0 00:03:05.985 EAL: Detected lcore 7 as core 7 on socket 0 00:03:05.985 EAL: Detected lcore 8 as core 8 on socket 0 00:03:05.985 EAL: Detected lcore 9 as core 9 on socket 0 00:03:05.985 EAL: Detected lcore 10 as core 10 on socket 0 00:03:05.985 EAL: Detected lcore 11 as core 11 on socket 0 00:03:05.985 EAL: Detected lcore 12 as core 12 on socket 0 00:03:05.985 EAL: Detected lcore 13 as core 13 on socket 0 00:03:05.985 EAL: Detected lcore 14 as core 14 on socket 0 00:03:05.985 EAL: Detected lcore 15 as core 15 on socket 0 00:03:05.985 EAL: Detected lcore 16 as core 16 on socket 0 00:03:05.985 EAL: Detected lcore 17 as core 17 on socket 0 00:03:05.985 EAL: Detected lcore 18 as core 18 on socket 0 00:03:05.985 EAL: Detected lcore 19 as core 19 on socket 0 00:03:05.985 EAL: Detected lcore 20 as core 20 on socket 0 00:03:05.986 EAL: Detected lcore 21 as core 21 on socket 0 00:03:05.986 EAL: Detected lcore 22 as core 22 on socket 0 00:03:05.986 EAL: Detected lcore 23 as core 23 on socket 0 00:03:05.986 EAL: Detected lcore 24 as core 24 on socket 0 00:03:05.986 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:05.986 EAL: Detected lcore 26 as core 26 on socket 0 00:03:05.986 EAL: Detected lcore 27 as core 27 on socket 0 00:03:05.986 EAL: Detected lcore 28 as core 28 on socket 0 00:03:05.986 EAL: Detected lcore 29 as core 29 on socket 0 00:03:05.986 EAL: Detected lcore 30 as core 30 on socket 0 00:03:05.986 EAL: Detected lcore 31 as core 31 on socket 0 00:03:05.986 EAL: Detected lcore 32 as core 32 on socket 0 00:03:05.986 EAL: Detected lcore 33 as core 33 on socket 0 00:03:05.986 EAL: Detected lcore 34 as core 34 on socket 0 00:03:05.986 EAL: Detected lcore 35 as core 35 on socket 0 00:03:05.986 EAL: Detected lcore 36 as core 0 on socket 1 00:03:05.986 EAL: Detected lcore 37 as core 1 on socket 1 00:03:05.986 EAL: Detected lcore 38 as core 2 on socket 1 00:03:05.986 EAL: Detected lcore 39 as core 3 on socket 1 00:03:05.986 EAL: Detected lcore 40 as core 4 on socket 1 00:03:05.986 EAL: Detected lcore 41 as core 5 on socket 1 00:03:05.986 EAL: Detected lcore 42 as core 6 on socket 1 00:03:05.986 EAL: Detected lcore 43 as core 7 on socket 1 00:03:05.986 EAL: Detected lcore 44 as core 8 on socket 1 00:03:05.986 EAL: Detected lcore 45 as core 9 on socket 1 00:03:05.986 EAL: Detected lcore 46 as core 10 on socket 1 00:03:05.986 EAL: Detected lcore 47 as core 11 on socket 1 00:03:05.986 EAL: Detected lcore 48 as core 12 on socket 1 00:03:05.986 EAL: Detected lcore 49 as core 13 on socket 1 00:03:05.986 EAL: Detected lcore 50 as core 14 on socket 1 00:03:05.986 EAL: Detected lcore 51 as core 15 on socket 1 00:03:05.986 EAL: Detected lcore 52 as core 16 on socket 1 00:03:05.986 EAL: Detected lcore 53 as core 17 on socket 1 00:03:05.986 EAL: Detected lcore 54 as core 18 on socket 1 00:03:05.986 EAL: Detected lcore 55 as core 19 on socket 1 00:03:05.986 EAL: Detected lcore 56 as core 20 on socket 1 00:03:05.986 EAL: Detected lcore 57 as core 21 on socket 1 00:03:05.986 EAL: Detected lcore 58 as core 22 on socket 1 00:03:05.986 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:05.986 EAL: Detected lcore 60 as core 24 on socket 1 00:03:05.986 EAL: Detected lcore 61 as core 25 on socket 1 00:03:05.986 EAL: Detected lcore 62 as core 26 on socket 1 00:03:05.986 EAL: Detected lcore 63 as core 27 on socket 1 00:03:05.986 EAL: Detected lcore 64 as core 28 on socket 1 00:03:05.986 EAL: Detected lcore 65 as core 29 on socket 1 00:03:05.986 EAL: Detected lcore 66 as core 30 on socket 1 00:03:05.986 EAL: Detected lcore 67 as core 31 on socket 1 00:03:05.986 EAL: Detected lcore 68 as core 32 on socket 1 00:03:05.986 EAL: Detected lcore 69 as core 33 on socket 1 00:03:05.986 EAL: Detected lcore 70 as core 34 on socket 1 00:03:05.986 EAL: Detected lcore 71 as core 35 on socket 1 00:03:05.986 EAL: Detected lcore 72 as core 0 on socket 0 00:03:05.986 EAL: Detected lcore 73 as core 1 on socket 0 00:03:05.986 EAL: Detected lcore 74 as core 2 on socket 0 00:03:05.986 EAL: Detected lcore 75 as core 3 on socket 0 00:03:05.986 EAL: Detected lcore 76 as core 4 on socket 0 00:03:05.986 EAL: Detected lcore 77 as core 5 on socket 0 00:03:05.986 EAL: Detected lcore 78 as core 6 on socket 0 00:03:05.986 EAL: Detected lcore 79 as core 7 on socket 0 00:03:05.986 EAL: Detected lcore 80 as core 8 on socket 0 00:03:05.986 EAL: Detected lcore 81 as core 9 on socket 0 00:03:05.986 EAL: Detected lcore 82 as core 10 on socket 0 00:03:05.986 EAL: Detected lcore 83 as core 11 on socket 0 00:03:05.986 EAL: Detected lcore 84 as core 12 on socket 0 00:03:05.986 EAL: Detected lcore 85 as core 13 on socket 0 00:03:05.986 EAL: Detected lcore 86 as core 14 on socket 0 00:03:05.986 EAL: Detected lcore 87 as core 15 on socket 0 00:03:05.986 EAL: Detected lcore 88 as core 16 on socket 0 00:03:05.986 EAL: Detected lcore 89 as core 17 on socket 0 00:03:05.986 EAL: Detected lcore 90 as core 18 on socket 0 00:03:05.986 EAL: Detected lcore 91 as core 19 on socket 0 00:03:05.986 EAL: Detected lcore 92 as core 20 on socket 0 00:03:05.986 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:05.986 EAL: Detected lcore 94 as core 22 on socket 0 00:03:05.986 EAL: Detected lcore 95 as core 23 on socket 0 00:03:05.986 EAL: Detected lcore 96 as core 24 on socket 0 00:03:05.986 EAL: Detected lcore 97 as core 25 on socket 0 00:03:05.986 EAL: Detected lcore 98 as core 26 on socket 0 00:03:05.986 EAL: Detected lcore 99 as core 27 on socket 0 00:03:05.986 EAL: Detected lcore 100 as core 28 on socket 0 00:03:05.986 EAL: Detected lcore 101 as core 29 on socket 0 00:03:05.986 EAL: Detected lcore 102 as core 30 on socket 0 00:03:05.986 EAL: Detected lcore 103 as core 31 on socket 0 00:03:05.986 EAL: Detected lcore 104 as core 32 on socket 0 00:03:05.986 EAL: Detected lcore 105 as core 33 on socket 0 00:03:05.986 EAL: Detected lcore 106 as core 34 on socket 0 00:03:05.986 EAL: Detected lcore 107 as core 35 on socket 0 00:03:05.986 EAL: Detected lcore 108 as core 0 on socket 1 00:03:05.986 EAL: Detected lcore 109 as core 1 on socket 1 00:03:05.986 EAL: Detected lcore 110 as core 2 on socket 1 00:03:05.986 EAL: Detected lcore 111 as core 3 on socket 1 00:03:05.986 EAL: Detected lcore 112 as core 4 on socket 1 00:03:05.986 EAL: Detected lcore 113 as core 5 on socket 1 00:03:05.986 EAL: Detected lcore 114 as core 6 on socket 1 00:03:05.986 EAL: Detected lcore 115 as core 7 on socket 1 00:03:05.986 EAL: Detected lcore 116 as core 8 on socket 1 00:03:05.986 EAL: Detected lcore 117 as core 9 on socket 1 00:03:05.986 EAL: Detected lcore 118 as core 10 on socket 1 00:03:05.986 EAL: Detected lcore 119 as core 11 on socket 1 00:03:05.986 EAL: Detected lcore 120 as core 12 on socket 1 00:03:05.986 EAL: Detected lcore 121 as core 13 on socket 1 00:03:05.986 EAL: Detected lcore 122 as core 14 on socket 1 00:03:05.986 EAL: Detected lcore 123 as core 15 on socket 1 00:03:05.986 EAL: Detected lcore 124 as core 16 on socket 1 00:03:05.986 EAL: Detected lcore 125 as core 17 on socket 1 00:03:05.986 EAL: Detected lcore 126 as core 18 on socket 1 00:03:05.986 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:05.986 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:05.986 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:05.986 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:05.986 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:05.986 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:05.986 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:05.986 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:05.986 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:05.986 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:05.986 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:05.986 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:05.986 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:05.986 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:05.986 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:05.986 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:05.986 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:05.986 EAL: Maximum logical cores by configuration: 128 00:03:05.986 EAL: Detected CPU lcores: 128 00:03:05.986 EAL: Detected NUMA nodes: 2 00:03:05.986 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:05.986 EAL: Detected shared linkage of DPDK 00:03:05.986 EAL: No shared files mode enabled, IPC will be disabled 00:03:06.248 EAL: Bus pci wants IOVA as 'DC' 00:03:06.248 EAL: Buses did not request a specific IOVA mode. 00:03:06.248 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:06.248 EAL: Selected IOVA mode 'VA' 00:03:06.248 EAL: Probing VFIO support... 00:03:06.248 EAL: IOMMU type 1 (Type 1) is supported 00:03:06.248 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:06.248 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:06.248 EAL: VFIO support initialized 00:03:06.248 EAL: Ask a virtual area of 0x2e000 bytes 00:03:06.248 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:06.248 EAL: Setting up physically contiguous memory... 
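The EAL topology scan above finds 144 hardware threads (2 sockets x 36 cores x 2 SMT threads) but caps usable lcores at the build-time maximum of 128, which is why lcores 128-143 are reported as skipped. A minimal sketch of that accounting, using the 128 limit printed in the log (the real ceiling is DPDK's compile-time constant, so this is illustrative only):

```shell
# Reproduce the detected/skipped split from the EAL log: 144 hardware
# threads, but only the configured maximum (128 here) become lcores.
hw_threads=$(( 2 * 36 * 2 ))   # 2 sockets x 36 cores x 2 SMT threads
max_lcore=128                  # "Maximum logical cores by configuration: 128"
detected=$(( hw_threads < max_lcore ? hw_threads : max_lcore ))
skipped=$(( hw_threads - detected ))
echo "detected=$detected skipped=$skipped"
```

This matches the log: 128 detected lcores and 16 skipped ones (128 through 143).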
00:03:06.248 EAL: Setting maximum number of open files to 524288 00:03:06.248 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:06.248 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:06.248 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:06.248 EAL: Ask a virtual area of 0x61000 bytes 00:03:06.248 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:06.248 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:06.248 EAL: Ask a virtual area of 0x400000000 bytes 00:03:06.248 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:06.248 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:06.248 EAL: Ask a virtual area of 0x61000 bytes 00:03:06.248 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:06.248 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:06.248 EAL: Ask a virtual area of 0x400000000 bytes 00:03:06.248 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:06.248 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:06.248 EAL: Ask a virtual area of 0x61000 bytes 00:03:06.248 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:06.248 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:06.248 EAL: Ask a virtual area of 0x400000000 bytes 00:03:06.248 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:06.248 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:06.248 EAL: Ask a virtual area of 0x61000 bytes 00:03:06.248 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:06.248 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:06.248 EAL: Ask a virtual area of 0x400000000 bytes 00:03:06.248 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:06.248 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:06.248 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:06.248 EAL: Ask a virtual area of 0x61000 bytes 00:03:06.248 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:06.248 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:06.248 EAL: Ask a virtual area of 0x400000000 bytes 00:03:06.248 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:06.248 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:06.248 EAL: Ask a virtual area of 0x61000 bytes 00:03:06.248 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:06.248 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:06.248 EAL: Ask a virtual area of 0x400000000 bytes 00:03:06.248 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:06.248 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:06.248 EAL: Ask a virtual area of 0x61000 bytes 00:03:06.248 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:06.248 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:06.248 EAL: Ask a virtual area of 0x400000000 bytes 00:03:06.248 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:06.248 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:06.248 EAL: Ask a virtual area of 0x61000 bytes 00:03:06.248 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:06.248 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:06.248 EAL: Ask a virtual area of 0x400000000 bytes 00:03:06.248 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:06.248 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:06.248 EAL: Hugepages will be freed exactly as allocated. 
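Each "Ask a virtual area of 0x400000000 bytes" above reserves room for one memseg list of 8192 segments at the 2 MiB hugepage size, and four such lists are created per NUMA node. The sizes in the log are internally consistent, as this quick arithmetic check shows (a sketch of the math only, not SPDK/DPDK code):

```shell
# Verify the memseg-list reservation math from the EAL log:
# n_segs * hugepage_sz should equal each 0x400000000-byte virtual area,
# and 4 lists per socket x 2 sockets covers all eight reservations.
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))    # 2097152 bytes (0x200000)
list_bytes=$(( n_segs * hugepage_sz ))
printf 'one list = 0x%x bytes\n' "$list_bytes"
total_va=$(( 4 * 2 * list_bytes ))     # 4 lists/socket, 2 sockets
echo "total reserved: $(( total_va / 1024 / 1024 / 1024 )) GiB"
```

So each list covers exactly 16 GiB of virtual address space, 128 GiB reserved in total, plus the small 0x61000-byte headers also requested above.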
00:03:06.248 EAL: No shared files mode enabled, IPC is disabled 00:03:06.248 EAL: No shared files mode enabled, IPC is disabled 00:03:06.248 EAL: TSC frequency is ~2400000 KHz 00:03:06.248 EAL: Main lcore 0 is ready (tid=7f009b5d0a00;cpuset=[0]) 00:03:06.248 EAL: Trying to obtain current memory policy. 00:03:06.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.248 EAL: Restoring previous memory policy: 0 00:03:06.248 EAL: request: mp_malloc_sync 00:03:06.248 EAL: No shared files mode enabled, IPC is disabled 00:03:06.248 EAL: Heap on socket 0 was expanded by 2MB 00:03:06.248 EAL: No shared files mode enabled, IPC is disabled 00:03:06.248 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:06.248 EAL: Mem event callback 'spdk:(nil)' registered 00:03:06.248 00:03:06.248 00:03:06.248 CUnit - A unit testing framework for C - Version 2.1-3 00:03:06.248 http://cunit.sourceforge.net/ 00:03:06.248 00:03:06.248 00:03:06.248 Suite: components_suite 00:03:06.248 Test: vtophys_malloc_test ...passed 00:03:06.249 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.249 EAL: Restoring previous memory policy: 4 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was expanded by 4MB 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was shrunk by 4MB 00:03:06.249 EAL: Trying to obtain current memory policy. 
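Every round of vtophys_spdk_malloc_test above pairs an "expanded by N MB" event with a matching "shrunk by N MB" event, so the registered mem event callback sees a net heap delta of zero after each round. A sketch of that bookkeeping (the `mem_event` function is a stand-in for illustration, not the callback SPDK actually registers):

```shell
# Ledger for the expand/shrink pairs in the log: every allocation round
# grows the heap and the matching free shrinks it by the same amount,
# so the running total returns to zero after each round.
heap=0
mem_event() {            # $1 = expand|shrink, $2 = size in MB
    case $1 in
        expand) heap=$(( heap + $2 )) ;;
        shrink) heap=$(( heap - $2 )) ;;
    esac
}
for mb in 4 6; do        # first two rounds seen in the log
    mem_event expand "$mb"
    mem_event shrink "$mb"
done
echo "net heap delta: ${heap} MB"
```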
00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.249 EAL: Restoring previous memory policy: 4 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was expanded by 6MB 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was shrunk by 6MB 00:03:06.249 EAL: Trying to obtain current memory policy. 00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.249 EAL: Restoring previous memory policy: 4 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was expanded by 10MB 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was shrunk by 10MB 00:03:06.249 EAL: Trying to obtain current memory policy. 00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.249 EAL: Restoring previous memory policy: 4 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was expanded by 18MB 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was shrunk by 18MB 00:03:06.249 EAL: Trying to obtain current memory policy. 
00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.249 EAL: Restoring previous memory policy: 4 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was expanded by 34MB 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was shrunk by 34MB 00:03:06.249 EAL: Trying to obtain current memory policy. 00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.249 EAL: Restoring previous memory policy: 4 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was expanded by 66MB 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was shrunk by 66MB 00:03:06.249 EAL: Trying to obtain current memory policy. 00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.249 EAL: Restoring previous memory policy: 4 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was expanded by 130MB 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was shrunk by 130MB 00:03:06.249 EAL: Trying to obtain current memory policy. 
00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.249 EAL: Restoring previous memory policy: 4 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was expanded by 258MB 00:03:06.249 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.249 EAL: request: mp_malloc_sync 00:03:06.249 EAL: No shared files mode enabled, IPC is disabled 00:03:06.249 EAL: Heap on socket 0 was shrunk by 258MB 00:03:06.249 EAL: Trying to obtain current memory policy. 00:03:06.249 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.510 EAL: Restoring previous memory policy: 4 00:03:06.510 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.510 EAL: request: mp_malloc_sync 00:03:06.510 EAL: No shared files mode enabled, IPC is disabled 00:03:06.510 EAL: Heap on socket 0 was expanded by 514MB 00:03:06.510 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.510 EAL: request: mp_malloc_sync 00:03:06.510 EAL: No shared files mode enabled, IPC is disabled 00:03:06.510 EAL: Heap on socket 0 was shrunk by 514MB 00:03:06.510 EAL: Trying to obtain current memory policy. 
00:03:06.510 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:06.803 EAL: Restoring previous memory policy: 4 00:03:06.803 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.803 EAL: request: mp_malloc_sync 00:03:06.803 EAL: No shared files mode enabled, IPC is disabled 00:03:06.803 EAL: Heap on socket 0 was expanded by 1026MB 00:03:06.803 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.803 EAL: request: mp_malloc_sync 00:03:06.803 EAL: No shared files mode enabled, IPC is disabled 00:03:06.803 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:06.803 passed 00:03:06.803 00:03:06.803 Run Summary: Type Total Ran Passed Failed Inactive 00:03:06.803 suites 1 1 n/a 0 0 00:03:06.803 tests 2 2 2 0 0 00:03:06.803 asserts 497 497 497 0 n/a 00:03:06.803 00:03:06.803 Elapsed time = 0.655 seconds 00:03:06.803 EAL: Calling mem event callback 'spdk:(nil)' 00:03:06.803 EAL: request: mp_malloc_sync 00:03:06.803 EAL: No shared files mode enabled, IPC is disabled 00:03:06.803 EAL: Heap on socket 0 was shrunk by 2MB 00:03:06.803 EAL: No shared files mode enabled, IPC is disabled 00:03:06.803 EAL: No shared files mode enabled, IPC is disabled 00:03:06.803 EAL: No shared files mode enabled, IPC is disabled 00:03:06.803 00:03:06.803 real 0m0.771s 00:03:06.803 user 0m0.408s 00:03:06.803 sys 0m0.340s 00:03:06.803 15:00:16 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:06.803 15:00:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:06.803 ************************************ 00:03:06.803 END TEST env_vtophys 00:03:06.803 ************************************ 00:03:06.803 15:00:16 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:06.803 15:00:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:06.803 15:00:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:06.803 15:00:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:06.803 
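The heap expansions logged by vtophys_spdk_malloc_test (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) roughly double each round; the sizes match 2^n + 2 for n = 1..10. The snippet below just reproduces that sequence — it is a reading of the log, not the test's actual allocation code.

```python
def expansion_sizes_mb(iterations=10):
    """Heap-expansion sizes (in MB) as seen in the vtophys log:
    2**n + 2 for n = 1..iterations. Illustrative only."""
    return [2 ** n + 2 for n in range(1, iterations + 1)]

print(expansion_sizes_mb())
# [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]
```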
************************************ 00:03:06.803 START TEST env_pci 00:03:06.803 ************************************ 00:03:06.803 15:00:16 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:07.063 00:03:07.063 00:03:07.063 CUnit - A unit testing framework for C - Version 2.1-3 00:03:07.063 http://cunit.sourceforge.net/ 00:03:07.063 00:03:07.063 00:03:07.063 Suite: pci 00:03:07.063 Test: pci_hook ...[2024-10-01 15:00:16.671969] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3720612 has claimed it 00:03:07.063 EAL: Cannot find device (10000:00:01.0) 00:03:07.063 EAL: Failed to attach device on primary process 00:03:07.063 passed 00:03:07.063 00:03:07.063 Run Summary: Type Total Ran Passed Failed Inactive 00:03:07.063 suites 1 1 n/a 0 0 00:03:07.063 tests 1 1 1 0 0 00:03:07.063 asserts 25 25 25 0 n/a 00:03:07.063 00:03:07.063 Elapsed time = 0.029 seconds 00:03:07.063 00:03:07.063 real 0m0.049s 00:03:07.063 user 0m0.014s 00:03:07.063 sys 0m0.034s 00:03:07.063 15:00:16 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:07.063 15:00:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:07.063 ************************************ 00:03:07.063 END TEST env_pci 00:03:07.063 ************************************ 00:03:07.063 15:00:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:07.064 15:00:16 env -- env/env.sh@15 -- # uname 00:03:07.064 15:00:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:07.064 15:00:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:07.064 15:00:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:07.064 15:00:16 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:07.064 15:00:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:07.064 15:00:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:07.064 ************************************ 00:03:07.064 START TEST env_dpdk_post_init 00:03:07.064 ************************************ 00:03:07.064 15:00:16 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:07.064 EAL: Detected CPU lcores: 128 00:03:07.064 EAL: Detected NUMA nodes: 2 00:03:07.064 EAL: Detected shared linkage of DPDK 00:03:07.064 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:07.064 EAL: Selected IOVA mode 'VA' 00:03:07.064 EAL: VFIO support initialized 00:03:07.064 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:07.064 EAL: Using IOMMU type 1 (Type 1) 00:03:07.323 EAL: Ignore mapping IO port bar(1) 00:03:07.323 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:07.582 EAL: Ignore mapping IO port bar(1) 00:03:07.582 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:07.912 EAL: Ignore mapping IO port bar(1) 00:03:07.912 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:07.912 EAL: Ignore mapping IO port bar(1) 00:03:08.173 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:08.173 EAL: Ignore mapping IO port bar(1) 00:03:08.173 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:08.433 EAL: Ignore mapping IO port bar(1) 00:03:08.433 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:08.693 EAL: Ignore mapping IO port bar(1) 00:03:08.693 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:08.953 EAL: Ignore mapping IO port bar(1) 00:03:08.953 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:09.213 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:09.213 EAL: Ignore mapping IO port bar(1) 00:03:09.473 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:09.473 EAL: Ignore mapping IO port bar(1) 00:03:09.733 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:09.733 EAL: Ignore mapping IO port bar(1) 00:03:09.733 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:09.993 EAL: Ignore mapping IO port bar(1) 00:03:09.993 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:10.253 EAL: Ignore mapping IO port bar(1) 00:03:10.253 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:10.513 EAL: Ignore mapping IO port bar(1) 00:03:10.513 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:10.513 EAL: Ignore mapping IO port bar(1) 00:03:10.773 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:10.773 EAL: Ignore mapping IO port bar(1) 00:03:11.032 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:11.032 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:11.032 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:11.032 Starting DPDK initialization... 00:03:11.032 Starting SPDK post initialization... 00:03:11.032 SPDK NVMe probe 00:03:11.032 Attaching to 0000:65:00.0 00:03:11.032 Attached to 0000:65:00.0 00:03:11.032 Cleaning up... 
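Each probe line above identifies a device by its PCI address in domain:bus:device.function form (e.g. 0000:65:00.0 for the NVMe drive, 0000:80:01.x for the socket-1 ioat channels). This hypothetical helper — not part of SPDK or DPDK — splits such an address into its numeric components:

```python
import re

# domain (4 hex) : bus (2 hex) : device (2 hex) . function (0-7)
_BDF_RE = re.compile(
    r"^([0-9a-fA-F]{4}):([0-9a-fA-F]{2}):([0-9a-fA-F]{2})\.([0-7])$")

def parse_bdf(addr):
    """Parse a PCI address like '0000:80:01.7' into (domain, bus, dev, fn)."""
    m = _BDF_RE.match(addr)
    if m is None:
        raise ValueError(f"not a PCI address: {addr!r}")
    return tuple(int(x, 16) for x in m.groups())

print(parse_bdf("0000:80:01.7"))  # (0, 128, 1, 7)
```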
00:03:12.940 00:03:12.940 real 0m5.717s 00:03:12.940 user 0m0.101s 00:03:12.940 sys 0m0.160s 00:03:12.940 15:00:22 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:12.940 15:00:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:12.940 ************************************ 00:03:12.940 END TEST env_dpdk_post_init 00:03:12.940 ************************************ 00:03:12.940 15:00:22 env -- env/env.sh@26 -- # uname 00:03:12.941 15:00:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:12.941 15:00:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:12.941 15:00:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.941 15:00:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.941 15:00:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:12.941 ************************************ 00:03:12.941 START TEST env_mem_callbacks 00:03:12.941 ************************************ 00:03:12.941 15:00:22 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:12.941 EAL: Detected CPU lcores: 128 00:03:12.941 EAL: Detected NUMA nodes: 2 00:03:12.941 EAL: Detected shared linkage of DPDK 00:03:12.941 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:12.941 EAL: Selected IOVA mode 'VA' 00:03:12.941 EAL: VFIO support initialized 00:03:12.941 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:12.941 00:03:12.941 00:03:12.941 CUnit - A unit testing framework for C - Version 2.1-3 00:03:12.941 http://cunit.sourceforge.net/ 00:03:12.941 00:03:12.941 00:03:12.941 Suite: memory 00:03:12.941 Test: test ... 
00:03:12.941 register 0x200000200000 2097152 00:03:12.941 malloc 3145728 00:03:12.941 register 0x200000400000 4194304 00:03:12.941 buf 0x200000500000 len 3145728 PASSED 00:03:12.941 malloc 64 00:03:12.941 buf 0x2000004fff40 len 64 PASSED 00:03:12.941 malloc 4194304 00:03:12.941 register 0x200000800000 6291456 00:03:12.941 buf 0x200000a00000 len 4194304 PASSED 00:03:12.941 free 0x200000500000 3145728 00:03:12.941 free 0x2000004fff40 64 00:03:12.941 unregister 0x200000400000 4194304 PASSED 00:03:12.941 free 0x200000a00000 4194304 00:03:12.941 unregister 0x200000800000 6291456 PASSED 00:03:12.941 malloc 8388608 00:03:12.941 register 0x200000400000 10485760 00:03:12.941 buf 0x200000600000 len 8388608 PASSED 00:03:12.941 free 0x200000600000 8388608 00:03:12.941 unregister 0x200000400000 10485760 PASSED 00:03:12.941 passed 00:03:12.941 00:03:12.941 Run Summary: Type Total Ran Passed Failed Inactive 00:03:12.941 suites 1 1 n/a 0 0 00:03:12.941 tests 1 1 1 0 0 00:03:12.941 asserts 15 15 15 0 n/a 00:03:12.941 00:03:12.941 Elapsed time = 0.006 seconds 00:03:12.941 00:03:12.941 real 0m0.045s 00:03:12.941 user 0m0.013s 00:03:12.941 sys 0m0.032s 00:03:12.941 15:00:22 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:12.941 15:00:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:12.941 ************************************ 00:03:12.941 END TEST env_mem_callbacks 00:03:12.941 ************************************ 00:03:12.941 00:03:12.941 real 0m7.364s 00:03:12.941 user 0m0.980s 00:03:12.941 sys 0m0.937s 00:03:12.941 15:00:22 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:12.941 15:00:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:12.941 ************************************ 00:03:12.941 END TEST env 00:03:12.941 ************************************ 00:03:12.941 15:00:22 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:12.941 15:00:22 
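In the mem_callbacks trace above, a 3145728-byte (3 MB) malloc triggers a 4194304-byte (4 MB) registration: lengths get rounded up to whole 2 MB hugepages before the register callback fires. The arithmetic alone, as an illustrative sketch (not SPDK's registration code):

```python
HUGEPAGE_SZ = 2 * 1024 * 1024  # 2 MB, the page size used in this run

def round_up_to_hugepage(nbytes, page=HUGEPAGE_SZ):
    """Round a length up to a whole number of hugepages."""
    return (nbytes + page - 1) // page * page

# malloc 3145728 (3 MB) -> register 4194304 (4 MB), matching the log
print(round_up_to_hugepage(3145728))  # 4194304
```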
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.941 15:00:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.941 15:00:22 -- common/autotest_common.sh@10 -- # set +x 00:03:12.941 ************************************ 00:03:12.941 START TEST rpc 00:03:12.941 ************************************ 00:03:12.941 15:00:22 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:13.200 * Looking for test storage... 00:03:13.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:13.200 15:00:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:13.200 15:00:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:13.200 15:00:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:13.200 15:00:22 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:13.200 15:00:22 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:13.200 15:00:22 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:13.200 15:00:22 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:13.200 15:00:22 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:13.200 15:00:22 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:13.200 15:00:22 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:13.200 15:00:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:13.200 15:00:22 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:13.200 15:00:22 rpc -- scripts/common.sh@345 -- # : 1 00:03:13.200 15:00:22 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:13.200 15:00:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:13.200 15:00:22 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:13.200 15:00:22 rpc -- scripts/common.sh@353 -- # local d=1 00:03:13.200 15:00:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:13.200 15:00:22 rpc -- scripts/common.sh@355 -- # echo 1 00:03:13.200 15:00:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:13.200 15:00:22 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:13.200 15:00:22 rpc -- scripts/common.sh@353 -- # local d=2 00:03:13.200 15:00:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:13.200 15:00:22 rpc -- scripts/common.sh@355 -- # echo 2 00:03:13.200 15:00:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:13.200 15:00:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:13.200 15:00:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:13.200 15:00:22 rpc -- scripts/common.sh@368 -- # return 0 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:13.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.200 --rc genhtml_branch_coverage=1 00:03:13.200 --rc genhtml_function_coverage=1 00:03:13.200 --rc genhtml_legend=1 00:03:13.200 --rc geninfo_all_blocks=1 00:03:13.200 --rc geninfo_unexecuted_blocks=1 00:03:13.200 00:03:13.200 ' 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:13.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.200 --rc genhtml_branch_coverage=1 00:03:13.200 --rc genhtml_function_coverage=1 00:03:13.200 --rc genhtml_legend=1 00:03:13.200 --rc geninfo_all_blocks=1 00:03:13.200 --rc geninfo_unexecuted_blocks=1 00:03:13.200 00:03:13.200 ' 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:13.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:13.200 --rc genhtml_branch_coverage=1 00:03:13.200 --rc genhtml_function_coverage=1 00:03:13.200 --rc genhtml_legend=1 00:03:13.200 --rc geninfo_all_blocks=1 00:03:13.200 --rc geninfo_unexecuted_blocks=1 00:03:13.200 00:03:13.200 ' 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:13.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:13.200 --rc genhtml_branch_coverage=1 00:03:13.200 --rc genhtml_function_coverage=1 00:03:13.200 --rc genhtml_legend=1 00:03:13.200 --rc geninfo_all_blocks=1 00:03:13.200 --rc geninfo_unexecuted_blocks=1 00:03:13.200 00:03:13.200 ' 00:03:13.200 15:00:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3722069 00:03:13.200 15:00:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:13.200 15:00:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3722069 00:03:13.200 15:00:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@831 -- # '[' -z 3722069 ']' 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:13.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:13.200 15:00:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:13.200 [2024-10-01 15:00:22.993632] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
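The xtrace above shows scripts/common.sh deciding lcov options via `lt 1.15 2`: cmp_versions splits each version string into components and compares them numerically, position by position. A simplified Python mirror of that idea (the real helper also splits on `-` and `:`, which this sketch omits):

```python
def lt_version(v1, v2):
    """True if v1 < v2 under component-wise numeric comparison,
    with missing components treated as 0. Simplified stand-in for
    the cmp_versions shell helper traced above."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b

print(lt_version("1.15", "2"))  # True: lcov 1.15 is older than 2.x
```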
00:03:13.200 [2024-10-01 15:00:22.993689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722069 ] 00:03:13.200 [2024-10-01 15:00:23.054196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:13.461 [2024-10-01 15:00:23.118383] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:13.461 [2024-10-01 15:00:23.118424] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3722069' to capture a snapshot of events at runtime. 00:03:13.461 [2024-10-01 15:00:23.118432] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:13.461 [2024-10-01 15:00:23.118438] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:13.461 [2024-10-01 15:00:23.118444] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3722069 for offline analysis/debug. 
00:03:13.461 [2024-10-01 15:00:23.118464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:14.031 15:00:23 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:14.031 15:00:23 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:14.031 15:00:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:14.031 15:00:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:14.031 15:00:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:14.031 15:00:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:14.031 15:00:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.031 15:00:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.031 15:00:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:14.031 ************************************ 00:03:14.031 START TEST rpc_integrity 00:03:14.031 ************************************ 00:03:14.031 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:14.031 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:14.031 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.031 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.031 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.032 15:00:23 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:14.032 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:14.032 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:14.032 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:14.032 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.032 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.032 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.032 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:14.292 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.292 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:14.292 { 00:03:14.292 "name": "Malloc0", 00:03:14.292 "aliases": [ 00:03:14.292 "23c5d78d-9fef-4311-b71e-013a20532763" 00:03:14.292 ], 00:03:14.292 "product_name": "Malloc disk", 00:03:14.292 "block_size": 512, 00:03:14.292 "num_blocks": 16384, 00:03:14.292 "uuid": "23c5d78d-9fef-4311-b71e-013a20532763", 00:03:14.292 "assigned_rate_limits": { 00:03:14.292 "rw_ios_per_sec": 0, 00:03:14.292 "rw_mbytes_per_sec": 0, 00:03:14.292 "r_mbytes_per_sec": 0, 00:03:14.292 "w_mbytes_per_sec": 0 00:03:14.292 }, 00:03:14.292 "claimed": false, 00:03:14.292 "zoned": false, 00:03:14.292 "supported_io_types": { 00:03:14.292 "read": true, 00:03:14.292 "write": true, 00:03:14.292 "unmap": true, 00:03:14.292 "flush": true, 00:03:14.292 "reset": true, 00:03:14.292 "nvme_admin": false, 00:03:14.292 "nvme_io": false, 00:03:14.292 "nvme_io_md": false, 00:03:14.292 "write_zeroes": true, 00:03:14.292 "zcopy": true, 00:03:14.292 "get_zone_info": false, 00:03:14.292 
"zone_management": false, 00:03:14.292 "zone_append": false, 00:03:14.292 "compare": false, 00:03:14.292 "compare_and_write": false, 00:03:14.292 "abort": true, 00:03:14.292 "seek_hole": false, 00:03:14.292 "seek_data": false, 00:03:14.292 "copy": true, 00:03:14.292 "nvme_iov_md": false 00:03:14.292 }, 00:03:14.292 "memory_domains": [ 00:03:14.292 { 00:03:14.292 "dma_device_id": "system", 00:03:14.292 "dma_device_type": 1 00:03:14.292 }, 00:03:14.292 { 00:03:14.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:14.292 "dma_device_type": 2 00:03:14.292 } 00:03:14.292 ], 00:03:14.292 "driver_specific": {} 00:03:14.292 } 00:03:14.292 ]' 00:03:14.292 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:14.292 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:14.292 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.292 [2024-10-01 15:00:23.961515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:14.292 [2024-10-01 15:00:23.961548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:14.292 [2024-10-01 15:00:23.961561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1da11f0 00:03:14.292 [2024-10-01 15:00:23.961568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:14.292 [2024-10-01 15:00:23.962921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:14.292 [2024-10-01 15:00:23.962944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:14.292 Passthru0 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.292 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.292 15:00:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.292 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:14.292 { 00:03:14.292 "name": "Malloc0", 00:03:14.292 "aliases": [ 00:03:14.292 "23c5d78d-9fef-4311-b71e-013a20532763" 00:03:14.292 ], 00:03:14.292 "product_name": "Malloc disk", 00:03:14.292 "block_size": 512, 00:03:14.292 "num_blocks": 16384, 00:03:14.292 "uuid": "23c5d78d-9fef-4311-b71e-013a20532763", 00:03:14.292 "assigned_rate_limits": { 00:03:14.292 "rw_ios_per_sec": 0, 00:03:14.293 "rw_mbytes_per_sec": 0, 00:03:14.293 "r_mbytes_per_sec": 0, 00:03:14.293 "w_mbytes_per_sec": 0 00:03:14.293 }, 00:03:14.293 "claimed": true, 00:03:14.293 "claim_type": "exclusive_write", 00:03:14.293 "zoned": false, 00:03:14.293 "supported_io_types": { 00:03:14.293 "read": true, 00:03:14.293 "write": true, 00:03:14.293 "unmap": true, 00:03:14.293 "flush": true, 00:03:14.293 "reset": true, 00:03:14.293 "nvme_admin": false, 00:03:14.293 "nvme_io": false, 00:03:14.293 "nvme_io_md": false, 00:03:14.293 "write_zeroes": true, 00:03:14.293 "zcopy": true, 00:03:14.293 "get_zone_info": false, 00:03:14.293 "zone_management": false, 00:03:14.293 "zone_append": false, 00:03:14.293 "compare": false, 00:03:14.293 "compare_and_write": false, 00:03:14.293 "abort": true, 00:03:14.293 "seek_hole": false, 00:03:14.293 "seek_data": false, 00:03:14.293 "copy": true, 00:03:14.293 "nvme_iov_md": false 00:03:14.293 }, 00:03:14.293 "memory_domains": [ 00:03:14.293 { 00:03:14.293 "dma_device_id": "system", 00:03:14.293 "dma_device_type": 1 00:03:14.293 }, 00:03:14.293 { 00:03:14.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:14.293 "dma_device_type": 2 00:03:14.293 } 00:03:14.293 ], 00:03:14.293 "driver_specific": {} 00:03:14.293 }, 00:03:14.293 { 
00:03:14.293 "name": "Passthru0", 00:03:14.293 "aliases": [ 00:03:14.293 "ef6a8170-ba7b-5f73-b880-8884756e7cd5" 00:03:14.293 ], 00:03:14.293 "product_name": "passthru", 00:03:14.293 "block_size": 512, 00:03:14.293 "num_blocks": 16384, 00:03:14.293 "uuid": "ef6a8170-ba7b-5f73-b880-8884756e7cd5", 00:03:14.293 "assigned_rate_limits": { 00:03:14.293 "rw_ios_per_sec": 0, 00:03:14.293 "rw_mbytes_per_sec": 0, 00:03:14.293 "r_mbytes_per_sec": 0, 00:03:14.293 "w_mbytes_per_sec": 0 00:03:14.293 }, 00:03:14.293 "claimed": false, 00:03:14.293 "zoned": false, 00:03:14.293 "supported_io_types": { 00:03:14.293 "read": true, 00:03:14.293 "write": true, 00:03:14.293 "unmap": true, 00:03:14.293 "flush": true, 00:03:14.293 "reset": true, 00:03:14.293 "nvme_admin": false, 00:03:14.293 "nvme_io": false, 00:03:14.293 "nvme_io_md": false, 00:03:14.293 "write_zeroes": true, 00:03:14.293 "zcopy": true, 00:03:14.293 "get_zone_info": false, 00:03:14.293 "zone_management": false, 00:03:14.293 "zone_append": false, 00:03:14.293 "compare": false, 00:03:14.293 "compare_and_write": false, 00:03:14.293 "abort": true, 00:03:14.293 "seek_hole": false, 00:03:14.293 "seek_data": false, 00:03:14.293 "copy": true, 00:03:14.293 "nvme_iov_md": false 00:03:14.293 }, 00:03:14.293 "memory_domains": [ 00:03:14.293 { 00:03:14.293 "dma_device_id": "system", 00:03:14.293 "dma_device_type": 1 00:03:14.293 }, 00:03:14.293 { 00:03:14.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:14.293 "dma_device_type": 2 00:03:14.293 } 00:03:14.293 ], 00:03:14.293 "driver_specific": { 00:03:14.293 "passthru": { 00:03:14.293 "name": "Passthru0", 00:03:14.293 "base_bdev_name": "Malloc0" 00:03:14.293 } 00:03:14.293 } 00:03:14.293 } 00:03:14.293 ]' 00:03:14.293 15:00:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:14.293 15:00:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:14.293 15:00:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:14.293 15:00:24 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.293 15:00:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.293 15:00:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.293 15:00:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:14.293 15:00:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:14.293 15:00:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:14.293 00:03:14.293 real 0m0.300s 00:03:14.293 user 0m0.194s 00:03:14.293 sys 0m0.035s 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:14.293 15:00:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:14.293 ************************************ 00:03:14.293 END TEST rpc_integrity 00:03:14.293 ************************************ 00:03:14.553 15:00:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:14.553 15:00:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.553 15:00:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.553 15:00:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:14.553 ************************************ 00:03:14.553 START TEST rpc_plugins 
00:03:14.553 ************************************ 00:03:14.553 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:14.553 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:14.553 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.553 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:14.553 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.553 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:14.553 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:14.553 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.553 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:14.553 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.553 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:14.553 { 00:03:14.553 "name": "Malloc1", 00:03:14.553 "aliases": [ 00:03:14.554 "8dbafa88-4dd1-4e4a-b54a-4ab0d99bf2d4" 00:03:14.554 ], 00:03:14.554 "product_name": "Malloc disk", 00:03:14.554 "block_size": 4096, 00:03:14.554 "num_blocks": 256, 00:03:14.554 "uuid": "8dbafa88-4dd1-4e4a-b54a-4ab0d99bf2d4", 00:03:14.554 "assigned_rate_limits": { 00:03:14.554 "rw_ios_per_sec": 0, 00:03:14.554 "rw_mbytes_per_sec": 0, 00:03:14.554 "r_mbytes_per_sec": 0, 00:03:14.554 "w_mbytes_per_sec": 0 00:03:14.554 }, 00:03:14.554 "claimed": false, 00:03:14.554 "zoned": false, 00:03:14.554 "supported_io_types": { 00:03:14.554 "read": true, 00:03:14.554 "write": true, 00:03:14.554 "unmap": true, 00:03:14.554 "flush": true, 00:03:14.554 "reset": true, 00:03:14.554 "nvme_admin": false, 00:03:14.554 "nvme_io": false, 00:03:14.554 "nvme_io_md": false, 00:03:14.554 "write_zeroes": true, 00:03:14.554 "zcopy": true, 00:03:14.554 "get_zone_info": false, 00:03:14.554 "zone_management": false, 00:03:14.554 
"zone_append": false, 00:03:14.554 "compare": false, 00:03:14.554 "compare_and_write": false, 00:03:14.554 "abort": true, 00:03:14.554 "seek_hole": false, 00:03:14.554 "seek_data": false, 00:03:14.554 "copy": true, 00:03:14.554 "nvme_iov_md": false 00:03:14.554 }, 00:03:14.554 "memory_domains": [ 00:03:14.554 { 00:03:14.554 "dma_device_id": "system", 00:03:14.554 "dma_device_type": 1 00:03:14.554 }, 00:03:14.554 { 00:03:14.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:14.554 "dma_device_type": 2 00:03:14.554 } 00:03:14.554 ], 00:03:14.554 "driver_specific": {} 00:03:14.554 } 00:03:14.554 ]' 00:03:14.554 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:14.554 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:14.554 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:14.554 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.554 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:14.554 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.554 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:14.554 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.554 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:14.554 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.554 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:14.554 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:14.554 15:00:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:14.554 00:03:14.554 real 0m0.149s 00:03:14.554 user 0m0.094s 00:03:14.554 sys 0m0.019s 00:03:14.554 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:14.554 15:00:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:14.554 ************************************ 
00:03:14.554 END TEST rpc_plugins 00:03:14.554 ************************************ 00:03:14.554 15:00:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:14.554 15:00:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.554 15:00:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.554 15:00:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:14.554 ************************************ 00:03:14.554 START TEST rpc_trace_cmd_test 00:03:14.554 ************************************ 00:03:14.554 15:00:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:14.554 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:14.554 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:14.554 15:00:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:14.554 15:00:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:14.814 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3722069", 00:03:14.814 "tpoint_group_mask": "0x8", 00:03:14.814 "iscsi_conn": { 00:03:14.814 "mask": "0x2", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "scsi": { 00:03:14.814 "mask": "0x4", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "bdev": { 00:03:14.814 "mask": "0x8", 00:03:14.814 "tpoint_mask": "0xffffffffffffffff" 00:03:14.814 }, 00:03:14.814 "nvmf_rdma": { 00:03:14.814 "mask": "0x10", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "nvmf_tcp": { 00:03:14.814 "mask": "0x20", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "ftl": { 00:03:14.814 "mask": "0x40", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "blobfs": { 00:03:14.814 "mask": "0x80", 00:03:14.814 
"tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "dsa": { 00:03:14.814 "mask": "0x200", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "thread": { 00:03:14.814 "mask": "0x400", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "nvme_pcie": { 00:03:14.814 "mask": "0x800", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "iaa": { 00:03:14.814 "mask": "0x1000", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "nvme_tcp": { 00:03:14.814 "mask": "0x2000", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "bdev_nvme": { 00:03:14.814 "mask": "0x4000", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "sock": { 00:03:14.814 "mask": "0x8000", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "blob": { 00:03:14.814 "mask": "0x10000", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 }, 00:03:14.814 "bdev_raid": { 00:03:14.814 "mask": "0x20000", 00:03:14.814 "tpoint_mask": "0x0" 00:03:14.814 } 00:03:14.814 }' 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:14.814 00:03:14.814 real 0m0.232s 00:03:14.814 user 0m0.192s 00:03:14.814 sys 0m0.030s 
00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:14.814 15:00:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:14.814 ************************************ 00:03:14.814 END TEST rpc_trace_cmd_test 00:03:14.814 ************************************ 00:03:14.814 15:00:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:14.814 15:00:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:14.814 15:00:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:14.814 15:00:24 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.814 15:00:24 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.814 15:00:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:15.075 ************************************ 00:03:15.075 START TEST rpc_daemon_integrity 00:03:15.075 ************************************ 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:15.075 { 00:03:15.075 "name": "Malloc2", 00:03:15.075 "aliases": [ 00:03:15.075 "360ef91a-ed7c-430d-8ec3-12acc9b563fc" 00:03:15.075 ], 00:03:15.075 "product_name": "Malloc disk", 00:03:15.075 "block_size": 512, 00:03:15.075 "num_blocks": 16384, 00:03:15.075 "uuid": "360ef91a-ed7c-430d-8ec3-12acc9b563fc", 00:03:15.075 "assigned_rate_limits": { 00:03:15.075 "rw_ios_per_sec": 0, 00:03:15.075 "rw_mbytes_per_sec": 0, 00:03:15.075 "r_mbytes_per_sec": 0, 00:03:15.075 "w_mbytes_per_sec": 0 00:03:15.075 }, 00:03:15.075 "claimed": false, 00:03:15.075 "zoned": false, 00:03:15.075 "supported_io_types": { 00:03:15.075 "read": true, 00:03:15.075 "write": true, 00:03:15.075 "unmap": true, 00:03:15.075 "flush": true, 00:03:15.075 "reset": true, 00:03:15.075 "nvme_admin": false, 00:03:15.075 "nvme_io": false, 00:03:15.075 "nvme_io_md": false, 00:03:15.075 "write_zeroes": true, 00:03:15.075 "zcopy": true, 00:03:15.075 "get_zone_info": false, 00:03:15.075 "zone_management": false, 00:03:15.075 "zone_append": false, 00:03:15.075 "compare": false, 00:03:15.075 "compare_and_write": false, 00:03:15.075 "abort": true, 00:03:15.075 "seek_hole": false, 00:03:15.075 "seek_data": false, 00:03:15.075 "copy": true, 00:03:15.075 "nvme_iov_md": false 00:03:15.075 }, 00:03:15.075 "memory_domains": [ 00:03:15.075 { 00:03:15.075 "dma_device_id": "system", 00:03:15.075 "dma_device_type": 1 00:03:15.075 }, 
00:03:15.075 { 00:03:15.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:15.075 "dma_device_type": 2 00:03:15.075 } 00:03:15.075 ], 00:03:15.075 "driver_specific": {} 00:03:15.075 } 00:03:15.075 ]' 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.075 [2024-10-01 15:00:24.856011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:15.075 [2024-10-01 15:00:24.856041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:15.075 [2024-10-01 15:00:24.856054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ed2d30 00:03:15.075 [2024-10-01 15:00:24.856061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:15.075 [2024-10-01 15:00:24.857366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:15.075 [2024-10-01 15:00:24.857386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:15.075 Passthru0 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:15.075 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:15.075 { 
00:03:15.075 "name": "Malloc2", 00:03:15.075 "aliases": [ 00:03:15.075 "360ef91a-ed7c-430d-8ec3-12acc9b563fc" 00:03:15.075 ], 00:03:15.075 "product_name": "Malloc disk", 00:03:15.075 "block_size": 512, 00:03:15.075 "num_blocks": 16384, 00:03:15.075 "uuid": "360ef91a-ed7c-430d-8ec3-12acc9b563fc", 00:03:15.075 "assigned_rate_limits": { 00:03:15.075 "rw_ios_per_sec": 0, 00:03:15.076 "rw_mbytes_per_sec": 0, 00:03:15.076 "r_mbytes_per_sec": 0, 00:03:15.076 "w_mbytes_per_sec": 0 00:03:15.076 }, 00:03:15.076 "claimed": true, 00:03:15.076 "claim_type": "exclusive_write", 00:03:15.076 "zoned": false, 00:03:15.076 "supported_io_types": { 00:03:15.076 "read": true, 00:03:15.076 "write": true, 00:03:15.076 "unmap": true, 00:03:15.076 "flush": true, 00:03:15.076 "reset": true, 00:03:15.076 "nvme_admin": false, 00:03:15.076 "nvme_io": false, 00:03:15.076 "nvme_io_md": false, 00:03:15.076 "write_zeroes": true, 00:03:15.076 "zcopy": true, 00:03:15.076 "get_zone_info": false, 00:03:15.076 "zone_management": false, 00:03:15.076 "zone_append": false, 00:03:15.076 "compare": false, 00:03:15.076 "compare_and_write": false, 00:03:15.076 "abort": true, 00:03:15.076 "seek_hole": false, 00:03:15.076 "seek_data": false, 00:03:15.076 "copy": true, 00:03:15.076 "nvme_iov_md": false 00:03:15.076 }, 00:03:15.076 "memory_domains": [ 00:03:15.076 { 00:03:15.076 "dma_device_id": "system", 00:03:15.076 "dma_device_type": 1 00:03:15.076 }, 00:03:15.076 { 00:03:15.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:15.076 "dma_device_type": 2 00:03:15.076 } 00:03:15.076 ], 00:03:15.076 "driver_specific": {} 00:03:15.076 }, 00:03:15.076 { 00:03:15.076 "name": "Passthru0", 00:03:15.076 "aliases": [ 00:03:15.076 "e54b0102-439e-5361-8fa2-059655e23fa4" 00:03:15.076 ], 00:03:15.076 "product_name": "passthru", 00:03:15.076 "block_size": 512, 00:03:15.076 "num_blocks": 16384, 00:03:15.076 "uuid": "e54b0102-439e-5361-8fa2-059655e23fa4", 00:03:15.076 "assigned_rate_limits": { 00:03:15.076 "rw_ios_per_sec": 0, 
00:03:15.076 "rw_mbytes_per_sec": 0, 00:03:15.076 "r_mbytes_per_sec": 0, 00:03:15.076 "w_mbytes_per_sec": 0 00:03:15.076 }, 00:03:15.076 "claimed": false, 00:03:15.076 "zoned": false, 00:03:15.076 "supported_io_types": { 00:03:15.076 "read": true, 00:03:15.076 "write": true, 00:03:15.076 "unmap": true, 00:03:15.076 "flush": true, 00:03:15.076 "reset": true, 00:03:15.076 "nvme_admin": false, 00:03:15.076 "nvme_io": false, 00:03:15.076 "nvme_io_md": false, 00:03:15.076 "write_zeroes": true, 00:03:15.076 "zcopy": true, 00:03:15.076 "get_zone_info": false, 00:03:15.076 "zone_management": false, 00:03:15.076 "zone_append": false, 00:03:15.076 "compare": false, 00:03:15.076 "compare_and_write": false, 00:03:15.076 "abort": true, 00:03:15.076 "seek_hole": false, 00:03:15.076 "seek_data": false, 00:03:15.076 "copy": true, 00:03:15.076 "nvme_iov_md": false 00:03:15.076 }, 00:03:15.076 "memory_domains": [ 00:03:15.076 { 00:03:15.076 "dma_device_id": "system", 00:03:15.076 "dma_device_type": 1 00:03:15.076 }, 00:03:15.076 { 00:03:15.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:15.076 "dma_device_type": 2 00:03:15.076 } 00:03:15.076 ], 00:03:15.076 "driver_specific": { 00:03:15.076 "passthru": { 00:03:15.076 "name": "Passthru0", 00:03:15.076 "base_bdev_name": "Malloc2" 00:03:15.076 } 00:03:15.076 } 00:03:15.076 } 00:03:15.076 ]' 00:03:15.076 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:15.076 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc2 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:15.336 15:00:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:15.336 15:00:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:15.336 00:03:15.336 real 0m0.303s 00:03:15.336 user 0m0.189s 00:03:15.336 sys 0m0.046s 00:03:15.336 15:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.336 15:00:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:15.336 ************************************ 00:03:15.336 END TEST rpc_daemon_integrity 00:03:15.336 ************************************ 00:03:15.336 15:00:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:15.336 15:00:25 rpc -- rpc/rpc.sh@84 -- # killprocess 3722069 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@950 -- # '[' -z 3722069 ']' 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@954 -- # kill -0 3722069 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@955 -- # uname 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3722069 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3722069' 00:03:15.336 killing process with pid 3722069 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@969 -- # kill 3722069 00:03:15.336 15:00:25 rpc -- common/autotest_common.sh@974 -- # wait 3722069 00:03:15.596 00:03:15.596 real 0m2.613s 00:03:15.596 user 0m3.387s 00:03:15.596 sys 0m0.727s 00:03:15.596 15:00:25 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.596 15:00:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:15.596 ************************************ 00:03:15.596 END TEST rpc 00:03:15.596 ************************************ 00:03:15.596 15:00:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:15.596 15:00:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.596 15:00:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.596 15:00:25 -- common/autotest_common.sh@10 -- # set +x 00:03:15.596 ************************************ 00:03:15.596 START TEST skip_rpc 00:03:15.596 ************************************ 00:03:15.596 15:00:25 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:15.856 * Looking for test storage... 
00:03:15.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:15.856 15:00:25 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:15.856 15:00:25 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:15.856 15:00:25 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:15.856 15:00:25 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:15.856 15:00:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.857 15:00:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:15.857 15:00:25 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.857 15:00:25 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:15.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.857 --rc genhtml_branch_coverage=1 00:03:15.857 --rc genhtml_function_coverage=1 00:03:15.857 --rc genhtml_legend=1 00:03:15.857 --rc geninfo_all_blocks=1 00:03:15.857 --rc geninfo_unexecuted_blocks=1 00:03:15.857 00:03:15.857 ' 00:03:15.857 15:00:25 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:15.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.857 --rc genhtml_branch_coverage=1 00:03:15.857 --rc genhtml_function_coverage=1 00:03:15.857 --rc genhtml_legend=1 00:03:15.857 --rc geninfo_all_blocks=1 00:03:15.857 --rc geninfo_unexecuted_blocks=1 00:03:15.857 00:03:15.857 ' 00:03:15.857 15:00:25 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:03:15.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.857 --rc genhtml_branch_coverage=1 00:03:15.857 --rc genhtml_function_coverage=1 00:03:15.857 --rc genhtml_legend=1 00:03:15.857 --rc geninfo_all_blocks=1 00:03:15.857 --rc geninfo_unexecuted_blocks=1 00:03:15.857 00:03:15.857 ' 00:03:15.857 15:00:25 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:15.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.857 --rc genhtml_branch_coverage=1 00:03:15.857 --rc genhtml_function_coverage=1 00:03:15.857 --rc genhtml_legend=1 00:03:15.857 --rc geninfo_all_blocks=1 00:03:15.857 --rc geninfo_unexecuted_blocks=1 00:03:15.857 00:03:15.857 ' 00:03:15.857 15:00:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:15.857 15:00:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:15.857 15:00:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:15.857 15:00:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.857 15:00:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.857 15:00:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:15.857 ************************************ 00:03:15.857 START TEST skip_rpc 00:03:15.857 ************************************ 00:03:15.857 15:00:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:15.857 15:00:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3722699 00:03:15.857 15:00:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:15.857 15:00:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:15.857 15:00:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:03:15.857 [2024-10-01 15:00:25.691485] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:03:15.857 [2024-10-01 15:00:25.691544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722699 ] 00:03:16.117 [2024-10-01 15:00:25.756630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:16.117 [2024-10-01 15:00:25.832514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:21.578 15:00:30 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3722699 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3722699 ']' 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3722699 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3722699 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3722699' 00:03:21.578 killing process with pid 3722699 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3722699 00:03:21.578 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3722699 00:03:21.579 00:03:21.579 real 0m5.284s 00:03:21.579 user 0m5.075s 00:03:21.579 sys 0m0.231s 00:03:21.579 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:21.579 15:00:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:21.579 ************************************ 00:03:21.579 END TEST skip_rpc 00:03:21.579 ************************************ 00:03:21.579 15:00:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:21.579 15:00:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:21.579 15:00:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:21.579 15:00:30 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:21.579 ************************************ 00:03:21.579 START TEST skip_rpc_with_json 00:03:21.579 ************************************ 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3723908 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3723908 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3723908 ']' 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:21.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:21.579 15:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:21.579 [2024-10-01 15:00:31.047595] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:03:21.579 [2024-10-01 15:00:31.047648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723908 ] 00:03:21.579 [2024-10-01 15:00:31.111414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:21.579 [2024-10-01 15:00:31.183231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:22.148 [2024-10-01 15:00:31.833131] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:22.148 request: 00:03:22.148 { 00:03:22.148 "trtype": "tcp", 00:03:22.148 "method": "nvmf_get_transports", 00:03:22.148 "req_id": 1 00:03:22.148 } 00:03:22.148 Got JSON-RPC error response 00:03:22.148 response: 00:03:22.148 { 00:03:22.148 "code": -19, 00:03:22.148 "message": "No such device" 00:03:22.148 } 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:22.148 [2024-10-01 15:00:31.845258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:22.148 15:00:31 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:22.148 15:00:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:22.407 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:22.408 15:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:22.408 { 00:03:22.408 "subsystems": [ 00:03:22.408 { 00:03:22.408 "subsystem": "fsdev", 00:03:22.408 "config": [ 00:03:22.408 { 00:03:22.408 "method": "fsdev_set_opts", 00:03:22.408 "params": { 00:03:22.408 "fsdev_io_pool_size": 65535, 00:03:22.408 "fsdev_io_cache_size": 256 00:03:22.408 } 00:03:22.408 } 00:03:22.408 ] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "vfio_user_target", 00:03:22.408 "config": null 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "keyring", 00:03:22.408 "config": [] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "iobuf", 00:03:22.408 "config": [ 00:03:22.408 { 00:03:22.408 "method": "iobuf_set_options", 00:03:22.408 "params": { 00:03:22.408 "small_pool_count": 8192, 00:03:22.408 "large_pool_count": 1024, 00:03:22.408 "small_bufsize": 8192, 00:03:22.408 "large_bufsize": 135168 00:03:22.408 } 00:03:22.408 } 00:03:22.408 ] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "sock", 00:03:22.408 "config": [ 00:03:22.408 { 00:03:22.408 "method": "sock_set_default_impl", 00:03:22.408 "params": { 00:03:22.408 "impl_name": "posix" 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "sock_impl_set_options", 00:03:22.408 "params": { 00:03:22.408 "impl_name": "ssl", 00:03:22.408 "recv_buf_size": 4096, 00:03:22.408 "send_buf_size": 4096, 00:03:22.408 "enable_recv_pipe": true, 
00:03:22.408 "enable_quickack": false, 00:03:22.408 "enable_placement_id": 0, 00:03:22.408 "enable_zerocopy_send_server": true, 00:03:22.408 "enable_zerocopy_send_client": false, 00:03:22.408 "zerocopy_threshold": 0, 00:03:22.408 "tls_version": 0, 00:03:22.408 "enable_ktls": false 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "sock_impl_set_options", 00:03:22.408 "params": { 00:03:22.408 "impl_name": "posix", 00:03:22.408 "recv_buf_size": 2097152, 00:03:22.408 "send_buf_size": 2097152, 00:03:22.408 "enable_recv_pipe": true, 00:03:22.408 "enable_quickack": false, 00:03:22.408 "enable_placement_id": 0, 00:03:22.408 "enable_zerocopy_send_server": true, 00:03:22.408 "enable_zerocopy_send_client": false, 00:03:22.408 "zerocopy_threshold": 0, 00:03:22.408 "tls_version": 0, 00:03:22.408 "enable_ktls": false 00:03:22.408 } 00:03:22.408 } 00:03:22.408 ] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "vmd", 00:03:22.408 "config": [] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "accel", 00:03:22.408 "config": [ 00:03:22.408 { 00:03:22.408 "method": "accel_set_options", 00:03:22.408 "params": { 00:03:22.408 "small_cache_size": 128, 00:03:22.408 "large_cache_size": 16, 00:03:22.408 "task_count": 2048, 00:03:22.408 "sequence_count": 2048, 00:03:22.408 "buf_count": 2048 00:03:22.408 } 00:03:22.408 } 00:03:22.408 ] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "bdev", 00:03:22.408 "config": [ 00:03:22.408 { 00:03:22.408 "method": "bdev_set_options", 00:03:22.408 "params": { 00:03:22.408 "bdev_io_pool_size": 65535, 00:03:22.408 "bdev_io_cache_size": 256, 00:03:22.408 "bdev_auto_examine": true, 00:03:22.408 "iobuf_small_cache_size": 128, 00:03:22.408 "iobuf_large_cache_size": 16 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "bdev_raid_set_options", 00:03:22.408 "params": { 00:03:22.408 "process_window_size_kb": 1024, 00:03:22.408 "process_max_bandwidth_mb_sec": 0 00:03:22.408 } 00:03:22.408 }, 
00:03:22.408 { 00:03:22.408 "method": "bdev_iscsi_set_options", 00:03:22.408 "params": { 00:03:22.408 "timeout_sec": 30 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "bdev_nvme_set_options", 00:03:22.408 "params": { 00:03:22.408 "action_on_timeout": "none", 00:03:22.408 "timeout_us": 0, 00:03:22.408 "timeout_admin_us": 0, 00:03:22.408 "keep_alive_timeout_ms": 10000, 00:03:22.408 "arbitration_burst": 0, 00:03:22.408 "low_priority_weight": 0, 00:03:22.408 "medium_priority_weight": 0, 00:03:22.408 "high_priority_weight": 0, 00:03:22.408 "nvme_adminq_poll_period_us": 10000, 00:03:22.408 "nvme_ioq_poll_period_us": 0, 00:03:22.408 "io_queue_requests": 0, 00:03:22.408 "delay_cmd_submit": true, 00:03:22.408 "transport_retry_count": 4, 00:03:22.408 "bdev_retry_count": 3, 00:03:22.408 "transport_ack_timeout": 0, 00:03:22.408 "ctrlr_loss_timeout_sec": 0, 00:03:22.408 "reconnect_delay_sec": 0, 00:03:22.408 "fast_io_fail_timeout_sec": 0, 00:03:22.408 "disable_auto_failback": false, 00:03:22.408 "generate_uuids": false, 00:03:22.408 "transport_tos": 0, 00:03:22.408 "nvme_error_stat": false, 00:03:22.408 "rdma_srq_size": 0, 00:03:22.408 "io_path_stat": false, 00:03:22.408 "allow_accel_sequence": false, 00:03:22.408 "rdma_max_cq_size": 0, 00:03:22.408 "rdma_cm_event_timeout_ms": 0, 00:03:22.408 "dhchap_digests": [ 00:03:22.408 "sha256", 00:03:22.408 "sha384", 00:03:22.408 "sha512" 00:03:22.408 ], 00:03:22.408 "dhchap_dhgroups": [ 00:03:22.408 "null", 00:03:22.408 "ffdhe2048", 00:03:22.408 "ffdhe3072", 00:03:22.408 "ffdhe4096", 00:03:22.408 "ffdhe6144", 00:03:22.408 "ffdhe8192" 00:03:22.408 ] 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "bdev_nvme_set_hotplug", 00:03:22.408 "params": { 00:03:22.408 "period_us": 100000, 00:03:22.408 "enable": false 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "bdev_wait_for_examine" 00:03:22.408 } 00:03:22.408 ] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "scsi", 
00:03:22.408 "config": null 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "scheduler", 00:03:22.408 "config": [ 00:03:22.408 { 00:03:22.408 "method": "framework_set_scheduler", 00:03:22.408 "params": { 00:03:22.408 "name": "static" 00:03:22.408 } 00:03:22.408 } 00:03:22.408 ] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "vhost_scsi", 00:03:22.408 "config": [] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "vhost_blk", 00:03:22.408 "config": [] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "ublk", 00:03:22.408 "config": [] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "nbd", 00:03:22.408 "config": [] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "nvmf", 00:03:22.408 "config": [ 00:03:22.408 { 00:03:22.408 "method": "nvmf_set_config", 00:03:22.408 "params": { 00:03:22.408 "discovery_filter": "match_any", 00:03:22.408 "admin_cmd_passthru": { 00:03:22.408 "identify_ctrlr": false 00:03:22.408 }, 00:03:22.408 "dhchap_digests": [ 00:03:22.408 "sha256", 00:03:22.408 "sha384", 00:03:22.408 "sha512" 00:03:22.408 ], 00:03:22.408 "dhchap_dhgroups": [ 00:03:22.408 "null", 00:03:22.408 "ffdhe2048", 00:03:22.408 "ffdhe3072", 00:03:22.408 "ffdhe4096", 00:03:22.408 "ffdhe6144", 00:03:22.408 "ffdhe8192" 00:03:22.408 ] 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "nvmf_set_max_subsystems", 00:03:22.408 "params": { 00:03:22.408 "max_subsystems": 1024 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "nvmf_set_crdt", 00:03:22.408 "params": { 00:03:22.408 "crdt1": 0, 00:03:22.408 "crdt2": 0, 00:03:22.408 "crdt3": 0 00:03:22.408 } 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "method": "nvmf_create_transport", 00:03:22.408 "params": { 00:03:22.408 "trtype": "TCP", 00:03:22.408 "max_queue_depth": 128, 00:03:22.408 "max_io_qpairs_per_ctrlr": 127, 00:03:22.408 "in_capsule_data_size": 4096, 00:03:22.408 "max_io_size": 131072, 00:03:22.408 "io_unit_size": 131072, 00:03:22.408 
"max_aq_depth": 128, 00:03:22.408 "num_shared_buffers": 511, 00:03:22.408 "buf_cache_size": 4294967295, 00:03:22.408 "dif_insert_or_strip": false, 00:03:22.408 "zcopy": false, 00:03:22.408 "c2h_success": true, 00:03:22.408 "sock_priority": 0, 00:03:22.408 "abort_timeout_sec": 1, 00:03:22.408 "ack_timeout": 0, 00:03:22.408 "data_wr_pool_size": 0 00:03:22.408 } 00:03:22.408 } 00:03:22.408 ] 00:03:22.408 }, 00:03:22.408 { 00:03:22.408 "subsystem": "iscsi", 00:03:22.408 "config": [ 00:03:22.408 { 00:03:22.408 "method": "iscsi_set_options", 00:03:22.408 "params": { 00:03:22.408 "node_base": "iqn.2016-06.io.spdk", 00:03:22.408 "max_sessions": 128, 00:03:22.408 "max_connections_per_session": 2, 00:03:22.408 "max_queue_depth": 64, 00:03:22.408 "default_time2wait": 2, 00:03:22.408 "default_time2retain": 20, 00:03:22.408 "first_burst_length": 8192, 00:03:22.408 "immediate_data": true, 00:03:22.408 "allow_duplicated_isid": false, 00:03:22.408 "error_recovery_level": 0, 00:03:22.408 "nop_timeout": 60, 00:03:22.408 "nop_in_interval": 30, 00:03:22.408 "disable_chap": false, 00:03:22.408 "require_chap": false, 00:03:22.408 "mutual_chap": false, 00:03:22.408 "chap_group": 0, 00:03:22.408 "max_large_datain_per_connection": 64, 00:03:22.408 "max_r2t_per_connection": 4, 00:03:22.409 "pdu_pool_size": 36864, 00:03:22.409 "immediate_data_pool_size": 16384, 00:03:22.409 "data_out_pool_size": 2048 00:03:22.409 } 00:03:22.409 } 00:03:22.409 ] 00:03:22.409 } 00:03:22.409 ] 00:03:22.409 } 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3723908 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3723908 ']' 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3723908 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3723908 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3723908' 00:03:22.409 killing process with pid 3723908 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3723908 00:03:22.409 15:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3723908 00:03:22.668 15:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3724073 00:03:22.668 15:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:22.668 15:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3724073 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3724073 ']' 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3724073 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3724073 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3724073' 00:03:27.946 killing process with pid 3724073 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3724073 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3724073 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:27.946 00:03:27.946 real 0m6.633s 00:03:27.946 user 0m6.527s 00:03:27.946 sys 0m0.555s 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:27.946 ************************************ 00:03:27.946 END TEST skip_rpc_with_json 00:03:27.946 ************************************ 00:03:27.946 15:00:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:27.946 15:00:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.946 15:00:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.946 15:00:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.946 ************************************ 00:03:27.946 START TEST skip_rpc_with_delay 00:03:27.946 ************************************ 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:27.946 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:27.947 [2024-10-01 15:00:37.767222] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:27.947 [2024-10-01 15:00:37.767327] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:27.947 00:03:27.947 real 0m0.089s 00:03:27.947 user 0m0.058s 00:03:27.947 sys 0m0.030s 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.947 15:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:27.947 ************************************ 00:03:27.947 END TEST skip_rpc_with_delay 00:03:27.947 ************************************ 00:03:28.207 15:00:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:28.208 15:00:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:28.208 15:00:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:28.208 15:00:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.208 15:00:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.208 15:00:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.208 ************************************ 00:03:28.208 START TEST exit_on_failed_rpc_init 00:03:28.208 ************************************ 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3725374 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3725374 00:03:28.208 15:00:37 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3725374 ']' 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:28.208 15:00:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:28.208 [2024-10-01 15:00:37.932703] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:03:28.208 [2024-10-01 15:00:37.932765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725374 ] 00:03:28.208 [2024-10-01 15:00:37.998002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.467 [2024-10-01 15:00:38.071826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:29.038 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:29.038 [2024-10-01 15:00:38.781720] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:03:29.038 [2024-10-01 15:00:38.781772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725383 ] 00:03:29.038 [2024-10-01 15:00:38.858504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.299 [2024-10-01 15:00:38.923046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:03:29.299 [2024-10-01 15:00:38.923103] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:29.299 [2024-10-01 15:00:38.923113] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:29.299 [2024-10-01 15:00:38.923119] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3725374 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3725374 ']' 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3725374 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:29.299 15:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3725374 00:03:29.299 15:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:29.299 15:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:29.299 15:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3725374' 
00:03:29.299 killing process with pid 3725374 00:03:29.299 15:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3725374 00:03:29.299 15:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3725374 00:03:29.559 00:03:29.559 real 0m1.412s 00:03:29.559 user 0m1.672s 00:03:29.559 sys 0m0.396s 00:03:29.559 15:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:29.559 15:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:29.559 ************************************ 00:03:29.559 END TEST exit_on_failed_rpc_init 00:03:29.559 ************************************ 00:03:29.559 15:00:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.559 00:03:29.559 real 0m13.907s 00:03:29.559 user 0m13.551s 00:03:29.559 sys 0m1.502s 00:03:29.559 15:00:39 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:29.559 15:00:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.559 ************************************ 00:03:29.559 END TEST skip_rpc 00:03:29.559 ************************************ 00:03:29.559 15:00:39 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:29.559 15:00:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:29.559 15:00:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:29.559 15:00:39 -- common/autotest_common.sh@10 -- # set +x 00:03:29.559 ************************************ 00:03:29.559 START TEST rpc_client 00:03:29.559 ************************************ 00:03:29.559 15:00:39 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:29.821 * Looking for test storage... 
00:03:29.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.821 15:00:39 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:29.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.821 --rc genhtml_branch_coverage=1 00:03:29.821 --rc genhtml_function_coverage=1 00:03:29.821 --rc genhtml_legend=1 00:03:29.821 --rc geninfo_all_blocks=1 00:03:29.821 --rc geninfo_unexecuted_blocks=1 00:03:29.821 00:03:29.821 ' 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:29.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.821 --rc genhtml_branch_coverage=1 00:03:29.821 --rc genhtml_function_coverage=1 00:03:29.821 --rc genhtml_legend=1 00:03:29.821 --rc geninfo_all_blocks=1 00:03:29.821 --rc geninfo_unexecuted_blocks=1 00:03:29.821 00:03:29.821 ' 00:03:29.821 15:00:39 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:29.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.821 --rc genhtml_branch_coverage=1 00:03:29.821 --rc genhtml_function_coverage=1 00:03:29.821 --rc genhtml_legend=1 00:03:29.821 --rc geninfo_all_blocks=1 00:03:29.821 --rc geninfo_unexecuted_blocks=1 00:03:29.821 00:03:29.821 ' 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:29.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.821 --rc genhtml_branch_coverage=1 00:03:29.821 --rc genhtml_function_coverage=1 00:03:29.821 --rc genhtml_legend=1 00:03:29.821 --rc geninfo_all_blocks=1 00:03:29.821 --rc geninfo_unexecuted_blocks=1 00:03:29.821 00:03:29.821 ' 00:03:29.821 15:00:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:29.821 OK 00:03:29.821 15:00:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:29.821 00:03:29.821 real 0m0.224s 00:03:29.821 user 0m0.128s 00:03:29.821 sys 0m0.110s 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:29.821 15:00:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:29.821 ************************************ 00:03:29.821 END TEST rpc_client 00:03:29.821 ************************************ 00:03:29.821 15:00:39 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:29.821 15:00:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:29.821 15:00:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:29.821 15:00:39 -- common/autotest_common.sh@10 -- # set +x 00:03:30.083 ************************************ 00:03:30.083 START TEST json_config 00:03:30.083 ************************************ 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:30.083 15:00:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:30.083 15:00:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:30.083 15:00:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:30.083 15:00:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.083 15:00:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:30.083 15:00:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:30.083 15:00:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:30.083 15:00:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:30.083 15:00:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:30.083 15:00:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:30.083 15:00:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:30.083 15:00:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:30.083 15:00:39 json_config -- scripts/common.sh@345 -- # : 1 00:03:30.083 15:00:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:30.083 15:00:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:30.083 15:00:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:30.083 15:00:39 json_config -- scripts/common.sh@353 -- # local d=1 00:03:30.083 15:00:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.083 15:00:39 json_config -- scripts/common.sh@355 -- # echo 1 00:03:30.083 15:00:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:30.083 15:00:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:30.083 15:00:39 json_config -- scripts/common.sh@353 -- # local d=2 00:03:30.083 15:00:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.083 15:00:39 json_config -- scripts/common.sh@355 -- # echo 2 00:03:30.083 15:00:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:30.083 15:00:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:30.083 15:00:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:30.083 15:00:39 json_config -- scripts/common.sh@368 -- # return 0 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:30.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.083 --rc genhtml_branch_coverage=1 00:03:30.083 --rc genhtml_function_coverage=1 00:03:30.083 --rc genhtml_legend=1 00:03:30.083 --rc geninfo_all_blocks=1 00:03:30.083 --rc geninfo_unexecuted_blocks=1 00:03:30.083 00:03:30.083 ' 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:30.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.083 --rc genhtml_branch_coverage=1 00:03:30.083 --rc genhtml_function_coverage=1 00:03:30.083 --rc genhtml_legend=1 00:03:30.083 --rc geninfo_all_blocks=1 00:03:30.083 --rc geninfo_unexecuted_blocks=1 00:03:30.083 00:03:30.083 ' 00:03:30.083 15:00:39 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:30.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.083 --rc genhtml_branch_coverage=1 00:03:30.083 --rc genhtml_function_coverage=1 00:03:30.083 --rc genhtml_legend=1 00:03:30.083 --rc geninfo_all_blocks=1 00:03:30.083 --rc geninfo_unexecuted_blocks=1 00:03:30.083 00:03:30.083 ' 00:03:30.083 15:00:39 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:30.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.083 --rc genhtml_branch_coverage=1 00:03:30.083 --rc genhtml_function_coverage=1 00:03:30.083 --rc genhtml_legend=1 00:03:30.083 --rc geninfo_all_blocks=1 00:03:30.083 --rc geninfo_unexecuted_blocks=1 00:03:30.083 00:03:30.083 ' 00:03:30.083 15:00:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:30.083 15:00:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:30.083 15:00:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:30.083 15:00:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:30.083 15:00:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:30.083 15:00:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.083 15:00:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.083 15:00:39 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.083 15:00:39 json_config -- paths/export.sh@5 -- # export PATH 00:03:30.083 15:00:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@51 -- # : 0 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:30.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:30.083 15:00:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:30.083 15:00:39 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:30.083 15:00:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:30.083 15:00:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:30.083 15:00:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:30.083 15:00:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:30.083 15:00:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:30.084 INFO: JSON configuration test init 00:03:30.084 15:00:39 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:30.084 15:00:39 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:30.084 15:00:39 json_config -- json_config/common.sh@9 -- # local app=target 00:03:30.084 15:00:39 json_config -- json_config/common.sh@10 -- # shift 00:03:30.084 15:00:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:30.084 15:00:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:30.084 15:00:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:30.084 15:00:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:30.084 15:00:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:30.084 15:00:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3725846 00:03:30.084 15:00:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:30.084 Waiting for target to run... 
00:03:30.084 15:00:39 json_config -- json_config/common.sh@25 -- # waitforlisten 3725846 /var/tmp/spdk_tgt.sock 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 3725846 ']' 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:30.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:30.084 15:00:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:30.084 15:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:30.345 [2024-10-01 15:00:39.978622] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:03:30.345 [2024-10-01 15:00:39.978688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3725846 ] 00:03:30.606 [2024-10-01 15:00:40.300551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.606 [2024-10-01 15:00:40.358625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.177 15:00:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:31.177 15:00:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:31.177 15:00:40 json_config -- json_config/common.sh@26 -- # echo '' 00:03:31.177 00:03:31.177 15:00:40 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:31.177 15:00:40 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:31.177 15:00:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.177 15:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.177 15:00:40 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:31.177 15:00:40 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:31.177 15:00:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:31.177 15:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.177 15:00:40 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:31.177 15:00:40 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:31.177 15:00:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:31.749 15:00:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.749 15:00:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:31.749 15:00:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@54 -- # sort 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:31.749 15:00:41 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:31.749 15:00:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:31.749 15:00:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:31.749 15:00:41 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:32.009 15:00:41 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:32.009 15:00:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:32.010 15:00:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.010 15:00:41 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:32.010 15:00:41 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:32.010 15:00:41 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:32.010 15:00:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:32.010 15:00:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:32.010 MallocForNvmf0 00:03:32.010 15:00:41 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:32.010 15:00:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:32.270 MallocForNvmf1 00:03:32.270 15:00:41 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:32.270 15:00:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:32.270 [2024-10-01 15:00:42.123092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:32.530 15:00:42 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:32.531 15:00:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:32.531 15:00:42 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:32.531 15:00:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:32.791 15:00:42 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:32.791 15:00:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:33.050 15:00:42 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:33.050 15:00:42 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:33.051 [2024-10-01 15:00:42.841432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:33.051 15:00:42 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:33.051 15:00:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:33.051 15:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:33.051 15:00:42 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:33.311 15:00:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:33.311 15:00:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:33.311 15:00:42 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:33.311 15:00:42 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:33.311 15:00:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:33.311 MallocBdevForConfigChangeCheck 00:03:33.311 15:00:43 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:33.311 15:00:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:33.311 15:00:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:33.311 15:00:43 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:33.311 15:00:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:33.881 15:00:43 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:33.881 INFO: shutting down applications... 00:03:33.881 15:00:43 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:33.881 15:00:43 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:33.881 15:00:43 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:33.881 15:00:43 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:34.141 Calling clear_iscsi_subsystem 00:03:34.141 Calling clear_nvmf_subsystem 00:03:34.141 Calling clear_nbd_subsystem 00:03:34.141 Calling clear_ublk_subsystem 00:03:34.141 Calling clear_vhost_blk_subsystem 00:03:34.141 Calling clear_vhost_scsi_subsystem 00:03:34.141 Calling clear_bdev_subsystem 00:03:34.141 15:00:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:34.141 15:00:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:34.141 15:00:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:34.141 15:00:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:34.141 15:00:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:34.141 15:00:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:34.712 15:00:44 json_config -- json_config/json_config.sh@352 -- # break 00:03:34.712 15:00:44 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:34.712 15:00:44 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:34.712 15:00:44 json_config -- json_config/common.sh@31 -- # local app=target 00:03:34.712 15:00:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:34.712 15:00:44 json_config -- json_config/common.sh@35 -- # [[ -n 3725846 ]] 00:03:34.712 15:00:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3725846 00:03:34.712 15:00:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:34.712 15:00:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:34.712 15:00:44 json_config -- json_config/common.sh@41 -- # kill -0 3725846 00:03:34.712 15:00:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:34.973 15:00:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:34.973 15:00:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:34.973 15:00:44 json_config -- json_config/common.sh@41 -- # kill -0 3725846 00:03:34.973 15:00:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:34.973 15:00:44 json_config -- json_config/common.sh@43 -- # break 00:03:34.973 15:00:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:34.973 15:00:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:34.973 SPDK target shutdown done 00:03:34.973 15:00:44 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:34.973 INFO: relaunching applications... 
00:03:34.973 15:00:44 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:34.973 15:00:44 json_config -- json_config/common.sh@9 -- # local app=target 00:03:34.973 15:00:44 json_config -- json_config/common.sh@10 -- # shift 00:03:34.973 15:00:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:34.973 15:00:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:34.973 15:00:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:34.973 15:00:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:34.973 15:00:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:34.973 15:00:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3726977 00:03:34.973 15:00:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:34.973 15:00:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:34.973 Waiting for target to run... 00:03:34.973 15:00:44 json_config -- json_config/common.sh@25 -- # waitforlisten 3726977 /var/tmp/spdk_tgt.sock 00:03:34.973 15:00:44 json_config -- common/autotest_common.sh@831 -- # '[' -z 3726977 ']' 00:03:34.973 15:00:44 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:34.973 15:00:44 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:34.973 15:00:44 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:34.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:34.973 15:00:44 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:34.973 15:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:35.233 [2024-10-01 15:00:44.871332] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:03:35.233 [2024-10-01 15:00:44.871394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3726977 ] 00:03:35.494 [2024-10-01 15:00:45.214113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.494 [2024-10-01 15:00:45.265642] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.065 [2024-10-01 15:00:45.786206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:36.065 [2024-10-01 15:00:45.818572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:36.065 15:00:45 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:36.065 15:00:45 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:36.065 15:00:45 json_config -- json_config/common.sh@26 -- # echo '' 00:03:36.065 00:03:36.065 15:00:45 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:36.065 15:00:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:36.065 INFO: Checking if target configuration is the same... 
00:03:36.065 15:00:45 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:36.065 15:00:45 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:36.065 15:00:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:36.065 + '[' 2 -ne 2 ']' 00:03:36.065 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:36.065 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:36.065 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.065 +++ basename /dev/fd/62 00:03:36.065 ++ mktemp /tmp/62.XXX 00:03:36.065 + tmp_file_1=/tmp/62.ZH3 00:03:36.065 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:36.065 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:36.065 + tmp_file_2=/tmp/spdk_tgt_config.json.2L8 00:03:36.065 + ret=0 00:03:36.065 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:36.325 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:36.586 + diff -u /tmp/62.ZH3 /tmp/spdk_tgt_config.json.2L8 00:03:36.586 + echo 'INFO: JSON config files are the same' 00:03:36.586 INFO: JSON config files are the same 00:03:36.586 + rm /tmp/62.ZH3 /tmp/spdk_tgt_config.json.2L8 00:03:36.586 + exit 0 00:03:36.586 15:00:46 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:36.586 15:00:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:36.586 INFO: changing configuration and checking if this can be detected... 
00:03:36.586 15:00:46 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:36.586 15:00:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:36.586 15:00:46 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:36.586 15:00:46 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:36.586 15:00:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:36.586 + '[' 2 -ne 2 ']' 00:03:36.586 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:36.586 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:36.586 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:36.586 +++ basename /dev/fd/62 00:03:36.586 ++ mktemp /tmp/62.XXX 00:03:36.586 + tmp_file_1=/tmp/62.aab 00:03:36.586 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:36.586 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:36.586 + tmp_file_2=/tmp/spdk_tgt_config.json.WDx 00:03:36.586 + ret=0 00:03:36.586 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:37.156 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:37.156 + diff -u /tmp/62.aab /tmp/spdk_tgt_config.json.WDx 00:03:37.156 + ret=1 00:03:37.156 + echo '=== Start of file: /tmp/62.aab ===' 00:03:37.156 + cat /tmp/62.aab 00:03:37.156 + echo '=== End of file: /tmp/62.aab ===' 00:03:37.156 + echo '' 00:03:37.156 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WDx ===' 00:03:37.156 + cat /tmp/spdk_tgt_config.json.WDx 00:03:37.156 + echo '=== End of file: /tmp/spdk_tgt_config.json.WDx ===' 00:03:37.156 + echo '' 00:03:37.156 + rm /tmp/62.aab /tmp/spdk_tgt_config.json.WDx 00:03:37.156 + exit 1 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:37.156 INFO: configuration change detected. 
00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@324 -- # [[ -n 3726977 ]] 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:37.156 15:00:46 json_config -- json_config/json_config.sh@330 -- # killprocess 3726977 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@950 -- # '[' -z 3726977 ']' 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@954 -- # kill -0 
3726977 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@955 -- # uname 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3726977 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3726977' 00:03:37.156 killing process with pid 3726977 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@969 -- # kill 3726977 00:03:37.156 15:00:46 json_config -- common/autotest_common.sh@974 -- # wait 3726977 00:03:37.417 15:00:47 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:37.417 15:00:47 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:37.417 15:00:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:37.417 15:00:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:37.417 15:00:47 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:37.417 15:00:47 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:37.417 INFO: Success 00:03:37.417 00:03:37.417 real 0m7.545s 00:03:37.417 user 0m9.098s 00:03:37.417 sys 0m2.006s 00:03:37.417 15:00:47 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.417 15:00:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:37.417 ************************************ 00:03:37.417 END TEST json_config 00:03:37.417 ************************************ 00:03:37.678 15:00:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:37.678 15:00:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.678 15:00:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.678 15:00:47 -- common/autotest_common.sh@10 -- # set +x 00:03:37.678 ************************************ 00:03:37.678 START TEST json_config_extra_key 00:03:37.678 ************************************ 00:03:37.678 15:00:47 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:37.678 15:00:47 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:37.678 15:00:47 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:03:37.678 15:00:47 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:37.678 15:00:47 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.678 15:00:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:37.679 15:00:47 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.679 15:00:47 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:37.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.679 --rc genhtml_branch_coverage=1 00:03:37.679 --rc genhtml_function_coverage=1 00:03:37.679 --rc genhtml_legend=1 00:03:37.679 --rc geninfo_all_blocks=1 
00:03:37.679 --rc geninfo_unexecuted_blocks=1 00:03:37.679 00:03:37.679 ' 00:03:37.679 15:00:47 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:37.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.679 --rc genhtml_branch_coverage=1 00:03:37.679 --rc genhtml_function_coverage=1 00:03:37.679 --rc genhtml_legend=1 00:03:37.679 --rc geninfo_all_blocks=1 00:03:37.679 --rc geninfo_unexecuted_blocks=1 00:03:37.679 00:03:37.679 ' 00:03:37.679 15:00:47 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:37.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.679 --rc genhtml_branch_coverage=1 00:03:37.679 --rc genhtml_function_coverage=1 00:03:37.679 --rc genhtml_legend=1 00:03:37.679 --rc geninfo_all_blocks=1 00:03:37.679 --rc geninfo_unexecuted_blocks=1 00:03:37.679 00:03:37.679 ' 00:03:37.679 15:00:47 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:37.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.679 --rc genhtml_branch_coverage=1 00:03:37.679 --rc genhtml_function_coverage=1 00:03:37.679 --rc genhtml_legend=1 00:03:37.679 --rc geninfo_all_blocks=1 00:03:37.679 --rc geninfo_unexecuted_blocks=1 00:03:37.679 00:03:37.679 ' 00:03:37.679 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.679 15:00:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.679 15:00:47 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.679 15:00:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.679 15:00:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.679 15:00:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:37.679 15:00:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:37.679 15:00:47 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:37.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:37.679 15:00:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:37.679 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:37.679 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:37.679 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:37.679 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:37.680 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:37.680 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:37.680 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:37.680 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:37.680 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:37.680 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:37.680 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:37.680 INFO: launching applications... 00:03:37.680 15:00:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3727623 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:37.680 Waiting for target to run... 
00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3727623 /var/tmp/spdk_tgt.sock 00:03:37.680 15:00:47 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3727623 ']' 00:03:37.680 15:00:47 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:37.680 15:00:47 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:37.680 15:00:47 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:37.680 15:00:47 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:37.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:37.680 15:00:47 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:37.680 15:00:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:37.940 [2024-10-01 15:00:47.562494] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:03:37.940 [2024-10-01 15:00:47.562567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727623 ] 00:03:38.200 [2024-10-01 15:00:47.817458] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.200 [2024-10-01 15:00:47.867356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.770 15:00:48 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:38.770 15:00:48 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:38.770 00:03:38.770 15:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:38.770 INFO: shutting down applications... 00:03:38.770 15:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3727623 ]] 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3727623 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3727623 00:03:38.770 15:00:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:39.030 15:00:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:39.030 15:00:48 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:39.030 15:00:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3727623 00:03:39.030 15:00:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:39.030 15:00:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:39.030 15:00:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:39.030 15:00:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:39.030 SPDK target shutdown done 00:03:39.030 15:00:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:39.030 Success 00:03:39.030 00:03:39.030 real 0m1.546s 00:03:39.030 user 0m1.213s 00:03:39.030 sys 0m0.389s 00:03:39.030 15:00:48 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:39.030 15:00:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:39.030 ************************************ 00:03:39.030 END TEST json_config_extra_key 00:03:39.030 ************************************ 00:03:39.291 15:00:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:39.291 15:00:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.291 15:00:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.291 15:00:48 -- common/autotest_common.sh@10 -- # set +x 00:03:39.291 ************************************ 00:03:39.291 START TEST alias_rpc 00:03:39.291 ************************************ 00:03:39.291 15:00:48 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:39.291 * Looking for test storage... 
00:03:39.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:39.291 15:00:49 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:39.291 15:00:49 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:39.291 15:00:49 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:39.292 15:00:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.292 --rc genhtml_branch_coverage=1 00:03:39.292 --rc genhtml_function_coverage=1 00:03:39.292 --rc genhtml_legend=1 00:03:39.292 --rc geninfo_all_blocks=1 00:03:39.292 --rc geninfo_unexecuted_blocks=1 00:03:39.292 00:03:39.292 ' 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.292 --rc genhtml_branch_coverage=1 00:03:39.292 --rc genhtml_function_coverage=1 00:03:39.292 --rc genhtml_legend=1 00:03:39.292 --rc geninfo_all_blocks=1 00:03:39.292 --rc geninfo_unexecuted_blocks=1 00:03:39.292 00:03:39.292 ' 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:03:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.292 --rc genhtml_branch_coverage=1 00:03:39.292 --rc genhtml_function_coverage=1 00:03:39.292 --rc genhtml_legend=1 00:03:39.292 --rc geninfo_all_blocks=1 00:03:39.292 --rc geninfo_unexecuted_blocks=1 00:03:39.292 00:03:39.292 ' 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:39.292 --rc genhtml_branch_coverage=1 00:03:39.292 --rc genhtml_function_coverage=1 00:03:39.292 --rc genhtml_legend=1 00:03:39.292 --rc geninfo_all_blocks=1 00:03:39.292 --rc geninfo_unexecuted_blocks=1 00:03:39.292 00:03:39.292 ' 00:03:39.292 15:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:39.292 15:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3727982 00:03:39.292 15:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3727982 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3727982 ']' 00:03:39.292 15:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:39.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:39.292 15:00:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.552 [2024-10-01 15:00:49.198712] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:03:39.553 [2024-10-01 15:00:49.198788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727982 ] 00:03:39.553 [2024-10-01 15:00:49.264243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.553 [2024-10-01 15:00:49.339243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.122 15:00:49 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:40.122 15:00:49 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:03:40.122 15:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:40.382 15:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3727982 00:03:40.382 15:00:50 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3727982 ']' 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3727982 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3727982 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3727982' 00:03:40.383 killing process with pid 3727982 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@969 -- # kill 3727982 00:03:40.383 15:00:50 alias_rpc -- common/autotest_common.sh@974 -- # wait 3727982 00:03:40.643 00:03:40.643 real 0m1.525s 00:03:40.643 user 0m1.651s 00:03:40.643 sys 0m0.417s 00:03:40.643 15:00:50 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.643 15:00:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.643 ************************************ 00:03:40.643 END TEST alias_rpc 00:03:40.643 ************************************ 00:03:40.643 15:00:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:40.643 15:00:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:40.643 15:00:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.643 15:00:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.643 15:00:50 -- common/autotest_common.sh@10 -- # set +x 00:03:40.904 ************************************ 00:03:40.904 START TEST spdkcli_tcp 00:03:40.904 ************************************ 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:40.904 * Looking for test storage... 
00:03:40.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.904 15:00:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.904 --rc genhtml_branch_coverage=1 00:03:40.904 --rc genhtml_function_coverage=1 00:03:40.904 --rc genhtml_legend=1 00:03:40.904 --rc geninfo_all_blocks=1 00:03:40.904 --rc geninfo_unexecuted_blocks=1 00:03:40.904 00:03:40.904 ' 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.904 --rc genhtml_branch_coverage=1 00:03:40.904 --rc genhtml_function_coverage=1 00:03:40.904 --rc genhtml_legend=1 00:03:40.904 --rc geninfo_all_blocks=1 00:03:40.904 --rc geninfo_unexecuted_blocks=1 00:03:40.904 00:03:40.904 ' 00:03:40.904 15:00:50 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.904 --rc genhtml_branch_coverage=1 00:03:40.904 --rc genhtml_function_coverage=1 00:03:40.904 --rc genhtml_legend=1 00:03:40.904 --rc geninfo_all_blocks=1 00:03:40.904 --rc geninfo_unexecuted_blocks=1 00:03:40.904 00:03:40.904 ' 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:40.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.904 --rc genhtml_branch_coverage=1 00:03:40.904 --rc genhtml_function_coverage=1 00:03:40.904 --rc genhtml_legend=1 00:03:40.904 --rc geninfo_all_blocks=1 00:03:40.904 --rc geninfo_unexecuted_blocks=1 00:03:40.904 00:03:40.904 ' 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3728334 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3728334 00:03:40.904 15:00:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3728334 ']' 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:40.904 15:00:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:41.165 [2024-10-01 15:00:50.804573] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:03:41.165 [2024-10-01 15:00:50.804645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728334 ] 00:03:41.165 [2024-10-01 15:00:50.872664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:41.165 [2024-10-01 15:00:50.949389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:03:41.165 [2024-10-01 15:00:50.949391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.735 15:00:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:41.735 15:00:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:03:41.735 15:00:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3728574 00:03:41.735 15:00:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:41.735 15:00:51 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:41.996 [ 00:03:41.996 "bdev_malloc_delete", 00:03:41.996 "bdev_malloc_create", 00:03:41.996 "bdev_null_resize", 00:03:41.996 "bdev_null_delete", 00:03:41.996 "bdev_null_create", 00:03:41.996 "bdev_nvme_cuse_unregister", 00:03:41.996 "bdev_nvme_cuse_register", 00:03:41.996 "bdev_opal_new_user", 00:03:41.996 "bdev_opal_set_lock_state", 00:03:41.996 "bdev_opal_delete", 00:03:41.996 "bdev_opal_get_info", 00:03:41.996 "bdev_opal_create", 00:03:41.996 "bdev_nvme_opal_revert", 00:03:41.996 "bdev_nvme_opal_init", 00:03:41.996 "bdev_nvme_send_cmd", 00:03:41.996 "bdev_nvme_set_keys", 00:03:41.996 "bdev_nvme_get_path_iostat", 00:03:41.996 "bdev_nvme_get_mdns_discovery_info", 00:03:41.996 "bdev_nvme_stop_mdns_discovery", 00:03:41.996 "bdev_nvme_start_mdns_discovery", 00:03:41.996 "bdev_nvme_set_multipath_policy", 00:03:41.996 "bdev_nvme_set_preferred_path", 00:03:41.996 "bdev_nvme_get_io_paths", 00:03:41.996 "bdev_nvme_remove_error_injection", 00:03:41.996 "bdev_nvme_add_error_injection", 00:03:41.996 "bdev_nvme_get_discovery_info", 00:03:41.996 "bdev_nvme_stop_discovery", 00:03:41.996 "bdev_nvme_start_discovery", 00:03:41.996 "bdev_nvme_get_controller_health_info", 00:03:41.996 "bdev_nvme_disable_controller", 00:03:41.996 "bdev_nvme_enable_controller", 00:03:41.996 "bdev_nvme_reset_controller", 00:03:41.996 "bdev_nvme_get_transport_statistics", 00:03:41.996 "bdev_nvme_apply_firmware", 00:03:41.996 "bdev_nvme_detach_controller", 00:03:41.996 "bdev_nvme_get_controllers", 00:03:41.996 "bdev_nvme_attach_controller", 00:03:41.996 "bdev_nvme_set_hotplug", 00:03:41.996 "bdev_nvme_set_options", 00:03:41.996 "bdev_passthru_delete", 00:03:41.996 "bdev_passthru_create", 00:03:41.996 "bdev_lvol_set_parent_bdev", 00:03:41.996 "bdev_lvol_set_parent", 00:03:41.996 "bdev_lvol_check_shallow_copy", 00:03:41.996 "bdev_lvol_start_shallow_copy", 00:03:41.996 "bdev_lvol_grow_lvstore", 00:03:41.996 
"bdev_lvol_get_lvols", 00:03:41.996 "bdev_lvol_get_lvstores", 00:03:41.996 "bdev_lvol_delete", 00:03:41.996 "bdev_lvol_set_read_only", 00:03:41.996 "bdev_lvol_resize", 00:03:41.996 "bdev_lvol_decouple_parent", 00:03:41.996 "bdev_lvol_inflate", 00:03:41.996 "bdev_lvol_rename", 00:03:41.996 "bdev_lvol_clone_bdev", 00:03:41.996 "bdev_lvol_clone", 00:03:41.996 "bdev_lvol_snapshot", 00:03:41.996 "bdev_lvol_create", 00:03:41.996 "bdev_lvol_delete_lvstore", 00:03:41.996 "bdev_lvol_rename_lvstore", 00:03:41.996 "bdev_lvol_create_lvstore", 00:03:41.996 "bdev_raid_set_options", 00:03:41.996 "bdev_raid_remove_base_bdev", 00:03:41.996 "bdev_raid_add_base_bdev", 00:03:41.996 "bdev_raid_delete", 00:03:41.996 "bdev_raid_create", 00:03:41.996 "bdev_raid_get_bdevs", 00:03:41.996 "bdev_error_inject_error", 00:03:41.996 "bdev_error_delete", 00:03:41.996 "bdev_error_create", 00:03:41.996 "bdev_split_delete", 00:03:41.996 "bdev_split_create", 00:03:41.996 "bdev_delay_delete", 00:03:41.996 "bdev_delay_create", 00:03:41.997 "bdev_delay_update_latency", 00:03:41.997 "bdev_zone_block_delete", 00:03:41.997 "bdev_zone_block_create", 00:03:41.997 "blobfs_create", 00:03:41.997 "blobfs_detect", 00:03:41.997 "blobfs_set_cache_size", 00:03:41.997 "bdev_aio_delete", 00:03:41.997 "bdev_aio_rescan", 00:03:41.997 "bdev_aio_create", 00:03:41.997 "bdev_ftl_set_property", 00:03:41.997 "bdev_ftl_get_properties", 00:03:41.997 "bdev_ftl_get_stats", 00:03:41.997 "bdev_ftl_unmap", 00:03:41.997 "bdev_ftl_unload", 00:03:41.997 "bdev_ftl_delete", 00:03:41.997 "bdev_ftl_load", 00:03:41.997 "bdev_ftl_create", 00:03:41.997 "bdev_virtio_attach_controller", 00:03:41.997 "bdev_virtio_scsi_get_devices", 00:03:41.997 "bdev_virtio_detach_controller", 00:03:41.997 "bdev_virtio_blk_set_hotplug", 00:03:41.997 "bdev_iscsi_delete", 00:03:41.997 "bdev_iscsi_create", 00:03:41.997 "bdev_iscsi_set_options", 00:03:41.997 "accel_error_inject_error", 00:03:41.997 "ioat_scan_accel_module", 00:03:41.997 "dsa_scan_accel_module", 
00:03:41.997 "iaa_scan_accel_module", 00:03:41.997 "vfu_virtio_create_fs_endpoint", 00:03:41.997 "vfu_virtio_create_scsi_endpoint", 00:03:41.997 "vfu_virtio_scsi_remove_target", 00:03:41.997 "vfu_virtio_scsi_add_target", 00:03:41.997 "vfu_virtio_create_blk_endpoint", 00:03:41.997 "vfu_virtio_delete_endpoint", 00:03:41.997 "keyring_file_remove_key", 00:03:41.997 "keyring_file_add_key", 00:03:41.997 "keyring_linux_set_options", 00:03:41.997 "fsdev_aio_delete", 00:03:41.997 "fsdev_aio_create", 00:03:41.997 "iscsi_get_histogram", 00:03:41.997 "iscsi_enable_histogram", 00:03:41.997 "iscsi_set_options", 00:03:41.997 "iscsi_get_auth_groups", 00:03:41.997 "iscsi_auth_group_remove_secret", 00:03:41.997 "iscsi_auth_group_add_secret", 00:03:41.997 "iscsi_delete_auth_group", 00:03:41.997 "iscsi_create_auth_group", 00:03:41.997 "iscsi_set_discovery_auth", 00:03:41.997 "iscsi_get_options", 00:03:41.997 "iscsi_target_node_request_logout", 00:03:41.997 "iscsi_target_node_set_redirect", 00:03:41.997 "iscsi_target_node_set_auth", 00:03:41.997 "iscsi_target_node_add_lun", 00:03:41.997 "iscsi_get_stats", 00:03:41.997 "iscsi_get_connections", 00:03:41.997 "iscsi_portal_group_set_auth", 00:03:41.997 "iscsi_start_portal_group", 00:03:41.997 "iscsi_delete_portal_group", 00:03:41.997 "iscsi_create_portal_group", 00:03:41.997 "iscsi_get_portal_groups", 00:03:41.997 "iscsi_delete_target_node", 00:03:41.997 "iscsi_target_node_remove_pg_ig_maps", 00:03:41.997 "iscsi_target_node_add_pg_ig_maps", 00:03:41.997 "iscsi_create_target_node", 00:03:41.997 "iscsi_get_target_nodes", 00:03:41.997 "iscsi_delete_initiator_group", 00:03:41.997 "iscsi_initiator_group_remove_initiators", 00:03:41.997 "iscsi_initiator_group_add_initiators", 00:03:41.997 "iscsi_create_initiator_group", 00:03:41.997 "iscsi_get_initiator_groups", 00:03:41.997 "nvmf_set_crdt", 00:03:41.997 "nvmf_set_config", 00:03:41.997 "nvmf_set_max_subsystems", 00:03:41.997 "nvmf_stop_mdns_prr", 00:03:41.997 "nvmf_publish_mdns_prr", 
00:03:41.997 "nvmf_subsystem_get_listeners", 00:03:41.997 "nvmf_subsystem_get_qpairs", 00:03:41.997 "nvmf_subsystem_get_controllers", 00:03:41.997 "nvmf_get_stats", 00:03:41.997 "nvmf_get_transports", 00:03:41.997 "nvmf_create_transport", 00:03:41.997 "nvmf_get_targets", 00:03:41.997 "nvmf_delete_target", 00:03:41.997 "nvmf_create_target", 00:03:41.997 "nvmf_subsystem_allow_any_host", 00:03:41.997 "nvmf_subsystem_set_keys", 00:03:41.997 "nvmf_subsystem_remove_host", 00:03:41.997 "nvmf_subsystem_add_host", 00:03:41.997 "nvmf_ns_remove_host", 00:03:41.997 "nvmf_ns_add_host", 00:03:41.997 "nvmf_subsystem_remove_ns", 00:03:41.997 "nvmf_subsystem_set_ns_ana_group", 00:03:41.997 "nvmf_subsystem_add_ns", 00:03:41.997 "nvmf_subsystem_listener_set_ana_state", 00:03:41.997 "nvmf_discovery_get_referrals", 00:03:41.997 "nvmf_discovery_remove_referral", 00:03:41.997 "nvmf_discovery_add_referral", 00:03:41.997 "nvmf_subsystem_remove_listener", 00:03:41.997 "nvmf_subsystem_add_listener", 00:03:41.997 "nvmf_delete_subsystem", 00:03:41.997 "nvmf_create_subsystem", 00:03:41.997 "nvmf_get_subsystems", 00:03:41.997 "env_dpdk_get_mem_stats", 00:03:41.997 "nbd_get_disks", 00:03:41.997 "nbd_stop_disk", 00:03:41.997 "nbd_start_disk", 00:03:41.997 "ublk_recover_disk", 00:03:41.997 "ublk_get_disks", 00:03:41.997 "ublk_stop_disk", 00:03:41.997 "ublk_start_disk", 00:03:41.997 "ublk_destroy_target", 00:03:41.997 "ublk_create_target", 00:03:41.997 "virtio_blk_create_transport", 00:03:41.997 "virtio_blk_get_transports", 00:03:41.997 "vhost_controller_set_coalescing", 00:03:41.997 "vhost_get_controllers", 00:03:41.997 "vhost_delete_controller", 00:03:41.997 "vhost_create_blk_controller", 00:03:41.997 "vhost_scsi_controller_remove_target", 00:03:41.997 "vhost_scsi_controller_add_target", 00:03:41.997 "vhost_start_scsi_controller", 00:03:41.997 "vhost_create_scsi_controller", 00:03:41.997 "thread_set_cpumask", 00:03:41.997 "scheduler_set_options", 00:03:41.997 "framework_get_governor", 00:03:41.997 
"framework_get_scheduler", 00:03:41.997 "framework_set_scheduler", 00:03:41.997 "framework_get_reactors", 00:03:41.997 "thread_get_io_channels", 00:03:41.997 "thread_get_pollers", 00:03:41.997 "thread_get_stats", 00:03:41.997 "framework_monitor_context_switch", 00:03:41.997 "spdk_kill_instance", 00:03:41.997 "log_enable_timestamps", 00:03:41.997 "log_get_flags", 00:03:41.997 "log_clear_flag", 00:03:41.997 "log_set_flag", 00:03:41.997 "log_get_level", 00:03:41.997 "log_set_level", 00:03:41.997 "log_get_print_level", 00:03:41.997 "log_set_print_level", 00:03:41.997 "framework_enable_cpumask_locks", 00:03:41.997 "framework_disable_cpumask_locks", 00:03:41.997 "framework_wait_init", 00:03:41.997 "framework_start_init", 00:03:41.997 "scsi_get_devices", 00:03:41.997 "bdev_get_histogram", 00:03:41.997 "bdev_enable_histogram", 00:03:41.997 "bdev_set_qos_limit", 00:03:41.997 "bdev_set_qd_sampling_period", 00:03:41.997 "bdev_get_bdevs", 00:03:41.997 "bdev_reset_iostat", 00:03:41.998 "bdev_get_iostat", 00:03:41.998 "bdev_examine", 00:03:41.998 "bdev_wait_for_examine", 00:03:41.998 "bdev_set_options", 00:03:41.998 "accel_get_stats", 00:03:41.998 "accel_set_options", 00:03:41.998 "accel_set_driver", 00:03:41.998 "accel_crypto_key_destroy", 00:03:41.998 "accel_crypto_keys_get", 00:03:41.998 "accel_crypto_key_create", 00:03:41.998 "accel_assign_opc", 00:03:41.998 "accel_get_module_info", 00:03:41.998 "accel_get_opc_assignments", 00:03:41.998 "vmd_rescan", 00:03:41.998 "vmd_remove_device", 00:03:41.998 "vmd_enable", 00:03:41.998 "sock_get_default_impl", 00:03:41.998 "sock_set_default_impl", 00:03:41.998 "sock_impl_set_options", 00:03:41.998 "sock_impl_get_options", 00:03:41.998 "iobuf_get_stats", 00:03:41.998 "iobuf_set_options", 00:03:41.998 "keyring_get_keys", 00:03:41.998 "vfu_tgt_set_base_path", 00:03:41.998 "framework_get_pci_devices", 00:03:41.998 "framework_get_config", 00:03:41.998 "framework_get_subsystems", 00:03:41.998 "fsdev_set_opts", 00:03:41.998 "fsdev_get_opts", 
00:03:41.998 "trace_get_info", 00:03:41.998 "trace_get_tpoint_group_mask", 00:03:41.998 "trace_disable_tpoint_group", 00:03:41.998 "trace_enable_tpoint_group", 00:03:41.998 "trace_clear_tpoint_mask", 00:03:41.998 "trace_set_tpoint_mask", 00:03:41.998 "notify_get_notifications", 00:03:41.998 "notify_get_types", 00:03:41.998 "spdk_get_version", 00:03:41.998 "rpc_get_methods" 00:03:41.998 ] 00:03:41.998 15:00:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:41.998 15:00:51 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:41.998 15:00:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:41.998 15:00:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:41.998 15:00:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3728334 00:03:41.998 15:00:51 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3728334 ']' 00:03:41.998 15:00:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3728334 00:03:41.998 15:00:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:03:41.998 15:00:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:41.998 15:00:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3728334 00:03:42.258 15:00:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:42.258 15:00:51 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:42.258 15:00:51 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3728334' 00:03:42.258 killing process with pid 3728334 00:03:42.258 15:00:51 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3728334 00:03:42.258 15:00:51 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3728334 00:03:42.258 00:03:42.258 real 0m1.582s 00:03:42.258 user 0m2.814s 00:03:42.258 sys 0m0.477s 00:03:42.258 15:00:52 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.258 15:00:52 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:42.258 ************************************ 00:03:42.258 END TEST spdkcli_tcp 00:03:42.258 ************************************ 00:03:42.519 15:00:52 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:42.519 15:00:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.519 15:00:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.519 15:00:52 -- common/autotest_common.sh@10 -- # set +x 00:03:42.519 ************************************ 00:03:42.519 START TEST dpdk_mem_utility 00:03:42.519 ************************************ 00:03:42.519 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:42.519 * Looking for test storage... 00:03:42.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:42.519 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:42.519 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:03:42.519 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:42.519 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.519 15:00:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:42.779 15:00:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.779 15:00:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.779 15:00:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.779 15:00:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:42.779 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.779 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:03:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.779 --rc genhtml_branch_coverage=1 00:03:42.779 --rc genhtml_function_coverage=1 00:03:42.779 --rc genhtml_legend=1 00:03:42.779 --rc geninfo_all_blocks=1 00:03:42.779 --rc geninfo_unexecuted_blocks=1 00:03:42.779 00:03:42.779 ' 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:42.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.780 --rc genhtml_branch_coverage=1 00:03:42.780 --rc genhtml_function_coverage=1 00:03:42.780 --rc genhtml_legend=1 00:03:42.780 --rc geninfo_all_blocks=1 00:03:42.780 --rc geninfo_unexecuted_blocks=1 00:03:42.780 00:03:42.780 ' 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:42.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.780 --rc genhtml_branch_coverage=1 00:03:42.780 --rc genhtml_function_coverage=1 00:03:42.780 --rc genhtml_legend=1 00:03:42.780 --rc geninfo_all_blocks=1 00:03:42.780 --rc geninfo_unexecuted_blocks=1 00:03:42.780 00:03:42.780 ' 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:42.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.780 --rc genhtml_branch_coverage=1 00:03:42.780 --rc genhtml_function_coverage=1 00:03:42.780 --rc genhtml_legend=1 00:03:42.780 --rc geninfo_all_blocks=1 00:03:42.780 --rc geninfo_unexecuted_blocks=1 00:03:42.780 00:03:42.780 ' 00:03:42.780 15:00:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:42.780 15:00:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3728707 00:03:42.780 15:00:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3728707 00:03:42.780 15:00:52 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3728707 ']' 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:42.780 15:00:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:42.780 [2024-10-01 15:00:52.443770] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:03:42.780 [2024-10-01 15:00:52.443843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728707 ] 00:03:42.780 [2024-10-01 15:00:52.508541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.780 [2024-10-01 15:00:52.583021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.722 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:43.722 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:03:43.722 15:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:43.722 15:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:43.722 15:00:53 dpdk_mem_utility -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.722 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:43.722 { 00:03:43.722 "filename": "/tmp/spdk_mem_dump.txt" 00:03:43.722 } 00:03:43.722 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.722 15:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:43.722 DPDK memory size 860.000000 MiB in 1 heap(s) 00:03:43.722 1 heaps totaling size 860.000000 MiB 00:03:43.722 size: 860.000000 MiB heap id: 0 00:03:43.722 end heaps---------- 00:03:43.722 9 mempools totaling size 642.649841 MiB 00:03:43.722 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:43.722 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:43.722 size: 92.545471 MiB name: bdev_io_3728707 00:03:43.722 size: 51.011292 MiB name: evtpool_3728707 00:03:43.722 size: 50.003479 MiB name: msgpool_3728707 00:03:43.722 size: 36.509338 MiB name: fsdev_io_3728707 00:03:43.722 size: 21.763794 MiB name: PDU_Pool 00:03:43.722 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:43.722 size: 0.026123 MiB name: Session_Pool 00:03:43.722 end mempools------- 00:03:43.722 6 memzones totaling size 4.142822 MiB 00:03:43.722 size: 1.000366 MiB name: RG_ring_0_3728707 00:03:43.722 size: 1.000366 MiB name: RG_ring_1_3728707 00:03:43.722 size: 1.000366 MiB name: RG_ring_4_3728707 00:03:43.722 size: 1.000366 MiB name: RG_ring_5_3728707 00:03:43.722 size: 0.125366 MiB name: RG_ring_2_3728707 00:03:43.722 size: 0.015991 MiB name: RG_ring_3_3728707 00:03:43.722 end memzones------- 00:03:43.723 15:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:43.723 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:03:43.723 list of free elements. 
size: 13.984680 MiB 00:03:43.723 element at address: 0x200000400000 with size: 1.999512 MiB 00:03:43.723 element at address: 0x200000800000 with size: 1.996948 MiB 00:03:43.723 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:03:43.723 element at address: 0x20001be00000 with size: 0.999878 MiB 00:03:43.723 element at address: 0x200034a00000 with size: 0.994446 MiB 00:03:43.723 element at address: 0x200009600000 with size: 0.959839 MiB 00:03:43.723 element at address: 0x200015e00000 with size: 0.954285 MiB 00:03:43.723 element at address: 0x20001c000000 with size: 0.936584 MiB 00:03:43.723 element at address: 0x200000200000 with size: 0.841614 MiB 00:03:43.723 element at address: 0x20001d800000 with size: 0.582886 MiB 00:03:43.723 element at address: 0x200003e00000 with size: 0.495605 MiB 00:03:43.723 element at address: 0x20000d800000 with size: 0.490723 MiB 00:03:43.723 element at address: 0x20001c200000 with size: 0.485657 MiB 00:03:43.723 element at address: 0x200007000000 with size: 0.481934 MiB 00:03:43.723 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:03:43.723 element at address: 0x200003a00000 with size: 0.354858 MiB 00:03:43.723 list of standard malloc elements. 
size: 199.218628 MiB 00:03:43.723 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:03:43.723 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:03:43.723 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:03:43.723 element at address: 0x20001befff80 with size: 1.000122 MiB 00:03:43.723 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:03:43.723 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:43.723 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:03:43.723 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:43.723 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:03:43.723 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:03:43.723 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:03:43.723 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:03:43.723 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:03:43.723 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:03:43.723 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:43.723 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003a5f240 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003aff880 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003affa80 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003affb40 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20000707b600 with size: 0.000183 MiB 00:03:43.723 element at 
address: 0x20000707b6c0 with size: 0.000183 MiB 00:03:43.723 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:03:43.723 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:03:43.723 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20001d895380 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20001d895440 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:03:43.723 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:03:43.723 list of memzone associated elements. 
size: 646.796692 MiB 00:03:43.723 element at address: 0x20001d895500 with size: 211.416748 MiB 00:03:43.723 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:43.723 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:03:43.723 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:43.723 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:03:43.723 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3728707_0 00:03:43.723 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:03:43.723 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3728707_0 00:03:43.723 element at address: 0x200003fff380 with size: 48.003052 MiB 00:03:43.723 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3728707_0 00:03:43.723 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:03:43.723 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3728707_0 00:03:43.723 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:03:43.723 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:43.723 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:03:43.723 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:43.723 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:03:43.723 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3728707 00:03:43.723 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:03:43.723 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3728707 00:03:43.723 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:43.723 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3728707 00:03:43.723 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:03:43.723 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:43.723 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:03:43.723 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:43.723 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:03:43.723 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:43.723 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:03:43.723 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:43.723 element at address: 0x200003eff180 with size: 1.000488 MiB 00:03:43.723 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3728707 00:03:43.723 element at address: 0x200003affc00 with size: 1.000488 MiB 00:03:43.723 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3728707 00:03:43.723 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:03:43.723 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3728707 00:03:43.723 element at address: 0x200034afe940 with size: 1.000488 MiB 00:03:43.723 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3728707 00:03:43.723 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:03:43.723 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3728707 00:03:43.723 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:03:43.723 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3728707 00:03:43.723 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:03:43.723 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:43.723 element at address: 0x20000707b780 with size: 0.500488 MiB 00:03:43.723 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:43.723 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:03:43.723 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:43.723 element at address: 0x200003a5f300 with size: 0.125488 MiB 00:03:43.723 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3728707 00:03:43.723 element at address: 0x2000096f5b80 with size: 
0.031738 MiB 00:03:43.723 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:43.723 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:03:43.723 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:43.723 element at address: 0x200003a5b040 with size: 0.016113 MiB 00:03:43.723 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3728707 00:03:43.723 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:03:43.723 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:43.723 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:03:43.723 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3728707 00:03:43.723 element at address: 0x200003aff940 with size: 0.000305 MiB 00:03:43.723 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3728707 00:03:43.723 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:03:43.723 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3728707 00:03:43.723 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:03:43.723 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:43.723 15:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:43.723 15:00:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3728707 00:03:43.723 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3728707 ']' 00:03:43.723 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3728707 00:03:43.723 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:03:43.723 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:43.723 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3728707 00:03:43.723 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:43.723 
15:00:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:43.723 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3728707' 00:03:43.723 killing process with pid 3728707 00:03:43.723 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3728707 00:03:43.724 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3728707 00:03:43.984 00:03:43.984 real 0m1.409s 00:03:43.984 user 0m1.459s 00:03:43.984 sys 0m0.410s 00:03:43.984 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.984 15:00:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:43.984 ************************************ 00:03:43.984 END TEST dpdk_mem_utility 00:03:43.984 ************************************ 00:03:43.984 15:00:53 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:43.984 15:00:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.984 15:00:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.984 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:03:43.984 ************************************ 00:03:43.984 START TEST event 00:03:43.984 ************************************ 00:03:43.984 15:00:53 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:43.984 * Looking for test storage... 
00:03:43.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:43.984 15:00:53 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:43.984 15:00:53 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:43.984 15:00:53 event -- common/autotest_common.sh@1681 -- # lcov --version 00:03:44.244 15:00:53 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:44.244 15:00:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.244 15:00:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.244 15:00:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.244 15:00:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.244 15:00:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.244 15:00:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.244 15:00:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.244 15:00:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.244 15:00:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.244 15:00:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.244 15:00:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.244 15:00:53 event -- scripts/common.sh@344 -- # case "$op" in 00:03:44.244 15:00:53 event -- scripts/common.sh@345 -- # : 1 00:03:44.244 15:00:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.244 15:00:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.244 15:00:53 event -- scripts/common.sh@365 -- # decimal 1 00:03:44.244 15:00:53 event -- scripts/common.sh@353 -- # local d=1 00:03:44.244 15:00:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.244 15:00:53 event -- scripts/common.sh@355 -- # echo 1 00:03:44.244 15:00:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.244 15:00:53 event -- scripts/common.sh@366 -- # decimal 2 00:03:44.244 15:00:53 event -- scripts/common.sh@353 -- # local d=2 00:03:44.244 15:00:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.244 15:00:53 event -- scripts/common.sh@355 -- # echo 2 00:03:44.244 15:00:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.244 15:00:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.244 15:00:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.244 15:00:53 event -- scripts/common.sh@368 -- # return 0 00:03:44.244 15:00:53 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.244 15:00:53 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:44.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.244 --rc genhtml_branch_coverage=1 00:03:44.244 --rc genhtml_function_coverage=1 00:03:44.244 --rc genhtml_legend=1 00:03:44.244 --rc geninfo_all_blocks=1 00:03:44.244 --rc geninfo_unexecuted_blocks=1 00:03:44.244 00:03:44.244 ' 00:03:44.244 15:00:53 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:44.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.244 --rc genhtml_branch_coverage=1 00:03:44.244 --rc genhtml_function_coverage=1 00:03:44.244 --rc genhtml_legend=1 00:03:44.244 --rc geninfo_all_blocks=1 00:03:44.244 --rc geninfo_unexecuted_blocks=1 00:03:44.244 00:03:44.244 ' 00:03:44.244 15:00:53 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:44.244 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:44.244 --rc genhtml_branch_coverage=1 00:03:44.244 --rc genhtml_function_coverage=1 00:03:44.244 --rc genhtml_legend=1 00:03:44.244 --rc geninfo_all_blocks=1 00:03:44.244 --rc geninfo_unexecuted_blocks=1 00:03:44.244 00:03:44.244 ' 00:03:44.244 15:00:53 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:44.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.244 --rc genhtml_branch_coverage=1 00:03:44.244 --rc genhtml_function_coverage=1 00:03:44.244 --rc genhtml_legend=1 00:03:44.244 --rc geninfo_all_blocks=1 00:03:44.244 --rc geninfo_unexecuted_blocks=1 00:03:44.244 00:03:44.244 ' 00:03:44.244 15:00:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:44.244 15:00:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:44.244 15:00:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:44.244 15:00:53 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:03:44.244 15:00:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.244 15:00:53 event -- common/autotest_common.sh@10 -- # set +x 00:03:44.245 ************************************ 00:03:44.245 START TEST event_perf 00:03:44.245 ************************************ 00:03:44.245 15:00:53 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:44.245 Running I/O for 1 seconds...[2024-10-01 15:00:53.912422] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:03:44.245 [2024-10-01 15:00:53.912458] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729062 ] 00:03:44.245 [2024-10-01 15:00:53.967070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:44.245 [2024-10-01 15:00:54.034549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:03:44.245 [2024-10-01 15:00:54.034663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:03:44.245 [2024-10-01 15:00:54.034817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.245 Running I/O for 1 seconds...[2024-10-01 15:00:54.034817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:03:45.629 00:03:45.629 lcore 0: 186744 00:03:45.629 lcore 1: 186745 00:03:45.629 lcore 2: 186747 00:03:45.629 lcore 3: 186750 00:03:45.629 done. 
00:03:45.629 00:03:45.629 real 0m1.183s 00:03:45.629 user 0m4.115s 00:03:45.629 sys 0m0.065s 00:03:45.629 15:00:55 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.629 15:00:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:45.629 ************************************ 00:03:45.629 END TEST event_perf 00:03:45.629 ************************************ 00:03:45.629 15:00:55 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:45.629 15:00:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:03:45.629 15:00:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.629 15:00:55 event -- common/autotest_common.sh@10 -- # set +x 00:03:45.629 ************************************ 00:03:45.629 START TEST event_reactor 00:03:45.629 ************************************ 00:03:45.629 15:00:55 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:45.629 [2024-10-01 15:00:55.184174] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:03:45.629 [2024-10-01 15:00:55.184275] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729412 ] 00:03:45.629 [2024-10-01 15:00:55.247264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.629 [2024-10-01 15:00:55.311478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.570 test_start 00:03:46.570 oneshot 00:03:46.570 tick 100 00:03:46.570 tick 100 00:03:46.570 tick 250 00:03:46.570 tick 100 00:03:46.570 tick 100 00:03:46.570 tick 100 00:03:46.570 tick 250 00:03:46.570 tick 500 00:03:46.570 tick 100 00:03:46.570 tick 100 00:03:46.570 tick 250 00:03:46.570 tick 100 00:03:46.570 tick 100 00:03:46.570 test_end 00:03:46.570 00:03:46.570 real 0m1.200s 00:03:46.570 user 0m1.123s 00:03:46.570 sys 0m0.073s 00:03:46.570 15:00:56 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.570 15:00:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:46.570 ************************************ 00:03:46.570 END TEST event_reactor 00:03:46.570 ************************************ 00:03:46.570 15:00:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:46.570 15:00:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:03:46.570 15:00:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.570 15:00:56 event -- common/autotest_common.sh@10 -- # set +x 00:03:46.831 ************************************ 00:03:46.831 START TEST event_reactor_perf 00:03:46.831 ************************************ 00:03:46.831 15:00:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:03:46.831 [2024-10-01 15:00:56.459581] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:03:46.831 [2024-10-01 15:00:56.459677] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729768 ] 00:03:46.831 [2024-10-01 15:00:56.523720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.831 [2024-10-01 15:00:56.588627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.212 test_start 00:03:48.212 test_end 00:03:48.212 Performance: 370594 events per second 00:03:48.212 00:03:48.212 real 0m1.203s 00:03:48.212 user 0m1.129s 00:03:48.212 sys 0m0.071s 00:03:48.212 15:00:57 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.212 15:00:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:48.212 ************************************ 00:03:48.212 END TEST event_reactor_perf 00:03:48.212 ************************************ 00:03:48.212 15:00:57 event -- event/event.sh@49 -- # uname -s 00:03:48.212 15:00:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:48.212 15:00:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:48.212 15:00:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.212 15:00:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.212 15:00:57 event -- common/autotest_common.sh@10 -- # set +x 00:03:48.212 ************************************ 00:03:48.212 START TEST event_scheduler 00:03:48.212 ************************************ 00:03:48.212 15:00:57 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:48.212 * Looking for test storage... 00:03:48.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:48.212 15:00:57 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:48.212 15:00:57 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:03:48.212 15:00:57 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:48.212 15:00:57 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.213 15:00:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:48.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.213 --rc genhtml_branch_coverage=1 00:03:48.213 --rc genhtml_function_coverage=1 00:03:48.213 --rc genhtml_legend=1 00:03:48.213 --rc geninfo_all_blocks=1 00:03:48.213 --rc geninfo_unexecuted_blocks=1 00:03:48.213 00:03:48.213 ' 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:48.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.213 --rc genhtml_branch_coverage=1 00:03:48.213 --rc genhtml_function_coverage=1 00:03:48.213 --rc 
genhtml_legend=1 00:03:48.213 --rc geninfo_all_blocks=1 00:03:48.213 --rc geninfo_unexecuted_blocks=1 00:03:48.213 00:03:48.213 ' 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:48.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.213 --rc genhtml_branch_coverage=1 00:03:48.213 --rc genhtml_function_coverage=1 00:03:48.213 --rc genhtml_legend=1 00:03:48.213 --rc geninfo_all_blocks=1 00:03:48.213 --rc geninfo_unexecuted_blocks=1 00:03:48.213 00:03:48.213 ' 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:48.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.213 --rc genhtml_branch_coverage=1 00:03:48.213 --rc genhtml_function_coverage=1 00:03:48.213 --rc genhtml_legend=1 00:03:48.213 --rc geninfo_all_blocks=1 00:03:48.213 --rc geninfo_unexecuted_blocks=1 00:03:48.213 00:03:48.213 ' 00:03:48.213 15:00:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:48.213 15:00:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3730147 00:03:48.213 15:00:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.213 15:00:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3730147 00:03:48.213 15:00:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3730147 ']' 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:48.213 15:00:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:48.213 [2024-10-01 15:00:57.980292] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:03:48.213 [2024-10-01 15:00:57.980363] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730147 ] 00:03:48.213 [2024-10-01 15:00:58.036349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:48.473 [2024-10-01 15:00:58.102548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.473 [2024-10-01 15:00:58.102709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:03:48.473 [2024-10-01 15:00:58.102864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:03:48.473 [2024-10-01 15:00:58.102866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:03:49.043 15:00:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:49.043 [2024-10-01 15:00:58.797052] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:49.043 [2024-10-01 15:00:58.797066] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:49.043 [2024-10-01 15:00:58.797073] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:49.043 [2024-10-01 15:00:58.797077] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:49.043 [2024-10-01 15:00:58.797081] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.043 15:00:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:49.043 [2024-10-01 15:00:58.853805] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.043 15:00:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.043 15:00:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:49.043 ************************************ 00:03:49.043 START TEST scheduler_create_thread 00:03:49.043 ************************************ 00:03:49.043 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:03:49.043 15:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:49.043 15:00:58 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.043 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:49.303 2 00:03:49.303 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.303 15:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:49.303 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.303 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:49.303 3 00:03:49.303 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:49.304 4 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:49.304 5 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.304 15:00:58 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:49.304 6 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:49.304 7 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:49.304 8 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:49.304 15:00:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.304 15:00:58 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:49.563 9 00:03:49.563 15:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:49.563 15:00:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:49.563 15:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.563 15:00:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:50.943 10 00:03:50.943 15:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.943 15:01:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:50.943 15:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.943 15:01:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:51.884 15:01:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:51.884 15:01:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:51.884 15:01:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:51.884 15:01:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:51.884 15:01:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:52.453 15:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:52.453 15:01:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:52.453 15:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:52.453 15:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:53.021 15:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.021 15:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:03:53.021 15:01:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:03:53.021 15:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.021 15:01:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:53.590 15:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.590 00:03:53.590 real 0m4.466s 00:03:53.590 user 0m0.023s 00:03:53.590 sys 0m0.009s 00:03:53.590 15:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.590 15:01:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:53.590 ************************************ 00:03:53.590 END TEST scheduler_create_thread 00:03:53.590 ************************************ 00:03:53.590 15:01:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:03:53.590 15:01:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3730147 00:03:53.590 15:01:03 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3730147 ']' 00:03:53.590 15:01:03 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 3730147 00:03:53.590 15:01:03 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:03:53.590 15:01:03 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:53.590 15:01:03 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3730147 00:03:53.850 15:01:03 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:03:53.850 15:01:03 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:03:53.850 15:01:03 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3730147' 00:03:53.850 killing process with pid 3730147 00:03:53.850 15:01:03 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3730147 00:03:53.850 15:01:03 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3730147 00:03:53.850 [2024-10-01 15:01:03.640125] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:03:54.111 00:03:54.111 real 0m6.082s 00:03:54.111 user 0m14.447s 00:03:54.111 sys 0m0.436s 00:03:54.111 15:01:03 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.111 15:01:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:54.111 ************************************ 00:03:54.111 END TEST event_scheduler 00:03:54.111 ************************************ 00:03:54.111 15:01:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:03:54.111 15:01:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:03:54.111 15:01:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.111 15:01:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.111 15:01:03 event -- common/autotest_common.sh@10 -- # set +x 00:03:54.111 ************************************ 00:03:54.111 START TEST app_repeat 00:03:54.111 ************************************ 00:03:54.111 15:01:03 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3731226 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3731226' 00:03:54.111 Process app_repeat pid: 3731226 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:03:54.111 spdk_app_start Round 0 00:03:54.111 15:01:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3731226 /var/tmp/spdk-nbd.sock 00:03:54.111 15:01:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3731226 ']' 00:03:54.111 15:01:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:54.111 15:01:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:54.111 15:01:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:54.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:03:54.111 15:01:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:54.111 15:01:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:54.111 [2024-10-01 15:01:03.927277] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:03:54.111 [2024-10-01 15:01:03.927349] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731226 ] 00:03:54.371 [2024-10-01 15:01:03.990718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:54.371 [2024-10-01 15:01:04.056828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:03:54.371 [2024-10-01 15:01:04.056830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.943 15:01:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:54.943 15:01:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:03:54.943 15:01:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:55.202 Malloc0 00:03:55.202 15:01:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:55.461 Malloc1 00:03:55.461 15:01:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:55.461 
15:01:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:55.461 15:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:55.462 15:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:55.462 15:01:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:55.462 /dev/nbd0 00:03:55.462 15:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:55.462 15:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:03:55.462 1+0 records in 00:03:55.462 1+0 records out 00:03:55.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142181 s, 28.8 MB/s 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:03:55.462 15:01:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:03:55.462 15:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:55.462 15:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:55.462 15:01:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:55.722 /dev/nbd1 00:03:55.722 15:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:55.722 15:01:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:03:55.722 15:01:05 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:55.722 1+0 records in 00:03:55.722 1+0 records out 00:03:55.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274077 s, 14.9 MB/s 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:03:55.722 15:01:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:03:55.722 15:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:55.722 15:01:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:55.722 15:01:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:55.722 15:01:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.722 15:01:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:55.982 15:01:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:55.982 { 00:03:55.982 "nbd_device": "/dev/nbd0", 00:03:55.982 "bdev_name": "Malloc0" 00:03:55.982 }, 00:03:55.982 { 00:03:55.982 "nbd_device": "/dev/nbd1", 00:03:55.982 "bdev_name": "Malloc1" 00:03:55.982 } 00:03:55.982 ]' 00:03:55.982 15:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:55.982 { 00:03:55.982 "nbd_device": "/dev/nbd0", 00:03:55.982 "bdev_name": "Malloc0" 00:03:55.982 
}, 00:03:55.982 { 00:03:55.982 "nbd_device": "/dev/nbd1", 00:03:55.982 "bdev_name": "Malloc1" 00:03:55.982 } 00:03:55.982 ]' 00:03:55.982 15:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:55.982 15:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:55.982 /dev/nbd1' 00:03:55.982 15:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:55.983 /dev/nbd1' 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:55.983 256+0 records in 00:03:55.983 256+0 records out 00:03:55.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121363 s, 86.4 MB/s 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:55.983 256+0 records in 00:03:55.983 256+0 records out 00:03:55.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158344 s, 66.2 MB/s 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:55.983 256+0 records in 00:03:55.983 256+0 records out 00:03:55.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170442 s, 61.5 MB/s 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:55.983 15:01:05 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:55.983 15:01:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:56.243 15:01:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:56.502 15:01:06 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:56.502 15:01:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:56.762 15:01:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:56.762 15:01:06 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:56.762 15:01:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:57.021 [2024-10-01 15:01:06.750258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:57.021 [2024-10-01 15:01:06.814331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.021 [2024-10-01 15:01:06.814332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.021 [2024-10-01 15:01:06.845912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:57.021 [2024-10-01 15:01:06.845949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:00.317 15:01:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:00.317 15:01:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:00.317 spdk_app_start Round 1 00:04:00.317 15:01:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3731226 /var/tmp/spdk-nbd.sock 00:04:00.317 15:01:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3731226 ']' 00:04:00.317 15:01:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:00.317 15:01:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:00.317 15:01:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:00.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:00.317 15:01:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:00.317 15:01:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:00.317 15:01:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:00.317 15:01:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:00.317 15:01:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:00.317 Malloc0 00:04:00.317 15:01:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:00.317 Malloc1 00:04:00.317 15:01:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:00.317 15:01:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:00.577 /dev/nbd0 00:04:00.577 15:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:00.577 15:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:00.577 1+0 records in 00:04:00.577 1+0 records out 00:04:00.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324996 s, 12.6 MB/s 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:00.577 15:01:10 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:00.577 15:01:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:00.577 15:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:00.577 15:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:00.577 15:01:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:00.838 /dev/nbd1 00:04:00.838 15:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:00.839 15:01:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:00.839 1+0 records in 00:04:00.839 1+0 records out 00:04:00.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147928 s, 27.7 MB/s 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:00.839 15:01:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:00.839 15:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:00.839 15:01:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:00.839 15:01:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:00.839 15:01:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:00.839 15:01:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:01.099 { 00:04:01.099 "nbd_device": "/dev/nbd0", 00:04:01.099 "bdev_name": "Malloc0" 00:04:01.099 }, 00:04:01.099 { 00:04:01.099 "nbd_device": "/dev/nbd1", 00:04:01.099 "bdev_name": "Malloc1" 00:04:01.099 } 00:04:01.099 ]' 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:01.099 { 00:04:01.099 "nbd_device": "/dev/nbd0", 00:04:01.099 "bdev_name": "Malloc0" 00:04:01.099 }, 00:04:01.099 { 00:04:01.099 "nbd_device": "/dev/nbd1", 00:04:01.099 "bdev_name": "Malloc1" 00:04:01.099 } 00:04:01.099 ]' 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:01.099 /dev/nbd1' 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:01.099 /dev/nbd1' 00:04:01.099 
15:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:01.099 256+0 records in 00:04:01.099 256+0 records out 00:04:01.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012079 s, 86.8 MB/s 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:01.099 256+0 records in 00:04:01.099 256+0 records out 00:04:01.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216888 s, 48.3 MB/s 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:01.099 256+0 records in 00:04:01.099 256+0 records out 00:04:01.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236177 s, 44.4 MB/s 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:01.099 15:01:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:01.100 15:01:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:01.100 15:01:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:01.100 15:01:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:01.100 15:01:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:01.100 15:01:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:01.100 15:01:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:01.100 15:01:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:01.100 15:01:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:01.360 15:01:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:01.620 15:01:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:01.620 15:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:01.881 15:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:01.881 15:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:01.881 15:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:01.881 15:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:01.881 15:01:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:01.881 15:01:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:01.881 15:01:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:01.881 15:01:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:01.881 15:01:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:01.881 15:01:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:02.142 [2024-10-01 15:01:11.799778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:02.142 [2024-10-01 15:01:11.862806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.142 [2024-10-01 15:01:11.862809] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.142 [2024-10-01 15:01:11.895197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:02.142 [2024-10-01 15:01:11.895235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:05.438 15:01:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:05.438 15:01:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:05.438 spdk_app_start Round 2 00:04:05.438 15:01:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3731226 /var/tmp/spdk-nbd.sock 00:04:05.438 15:01:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3731226 ']' 00:04:05.438 15:01:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:05.438 15:01:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.438 15:01:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:05.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:05.438 15:01:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.438 15:01:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:05.438 15:01:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:05.438 15:01:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:05.438 15:01:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:05.438 Malloc0 00:04:05.438 15:01:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:05.438 Malloc1 00:04:05.438 15:01:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:05.438 15:01:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:05.699 /dev/nbd0 00:04:05.699 15:01:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:05.699 15:01:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:05.699 1+0 records in 00:04:05.699 1+0 records out 00:04:05.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191095 s, 21.4 MB/s 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:05.699 15:01:15 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:05.699 15:01:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:05.699 15:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:05.699 15:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:05.699 15:01:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:06.005 /dev/nbd1 00:04:06.005 15:01:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:06.005 15:01:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:06.005 1+0 records in 00:04:06.005 1+0 records out 00:04:06.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199888 s, 20.5 MB/s 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:06.005 15:01:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:06.005 15:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:06.005 15:01:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:06.005 15:01:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:06.005 15:01:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.006 15:01:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:06.006 15:01:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:06.006 { 00:04:06.006 "nbd_device": "/dev/nbd0", 00:04:06.006 "bdev_name": "Malloc0" 00:04:06.006 }, 00:04:06.006 { 00:04:06.006 "nbd_device": "/dev/nbd1", 00:04:06.006 "bdev_name": "Malloc1" 00:04:06.006 } 00:04:06.006 ]' 00:04:06.006 15:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:06.006 { 00:04:06.006 "nbd_device": "/dev/nbd0", 00:04:06.006 "bdev_name": "Malloc0" 00:04:06.006 }, 00:04:06.006 { 00:04:06.006 "nbd_device": "/dev/nbd1", 00:04:06.006 "bdev_name": "Malloc1" 00:04:06.006 } 00:04:06.006 ]' 00:04:06.006 15:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:06.328 /dev/nbd1' 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:06.328 /dev/nbd1' 00:04:06.328 
15:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:06.328 256+0 records in 00:04:06.328 256+0 records out 00:04:06.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00307848 s, 341 MB/s 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:06.328 15:01:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:06.328 256+0 records in 00:04:06.328 256+0 records out 00:04:06.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156797 s, 66.9 MB/s 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:06.329 256+0 records in 00:04:06.329 256+0 records out 00:04:06.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258172 s, 40.6 MB/s 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:06.329 15:01:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:06.329 15:01:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:06.605 15:01:16 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.605 15:01:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:06.865 15:01:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:06.866 15:01:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:06.866 15:01:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:07.127 15:01:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:07.127 [2024-10-01 15:01:16.861916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:07.127 [2024-10-01 15:01:16.925616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.127 [2024-10-01 15:01:16.925619] 
reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.127 [2024-10-01 15:01:16.957254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:07.127 [2024-10-01 15:01:16.957287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:10.431 15:01:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3731226 /var/tmp/spdk-nbd.sock 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3731226 ']' 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:10.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:10.431 15:01:19 event.app_repeat -- event/event.sh@39 -- # killprocess 3731226 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3731226 ']' 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3731226 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3731226 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3731226' 00:04:10.431 killing process with pid 3731226 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3731226 00:04:10.431 15:01:19 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3731226 00:04:10.431 spdk_app_start is called in Round 0. 00:04:10.431 Shutdown signal received, stop current app iteration 00:04:10.431 Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 reinitialization... 00:04:10.431 spdk_app_start is called in Round 1. 00:04:10.431 Shutdown signal received, stop current app iteration 00:04:10.431 Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 reinitialization... 00:04:10.431 spdk_app_start is called in Round 2. 
00:04:10.431 Shutdown signal received, stop current app iteration 00:04:10.431 Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 reinitialization... 00:04:10.431 spdk_app_start is called in Round 3. 00:04:10.431 Shutdown signal received, stop current app iteration 00:04:10.431 15:01:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:10.431 15:01:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:10.431 00:04:10.431 real 0m16.194s 00:04:10.431 user 0m35.127s 00:04:10.431 sys 0m2.281s 00:04:10.431 15:01:20 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.431 15:01:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:10.431 ************************************ 00:04:10.431 END TEST app_repeat 00:04:10.431 ************************************ 00:04:10.431 15:01:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:10.431 15:01:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:10.431 15:01:20 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.431 15:01:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.431 15:01:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.431 ************************************ 00:04:10.431 START TEST cpu_locks 00:04:10.431 ************************************ 00:04:10.431 15:01:20 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:10.431 * Looking for test storage... 
00:04:10.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:10.431 15:01:20 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:10.431 15:01:20 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:04:10.431 15:01:20 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:10.693 15:01:20 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.693 15:01:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:10.693 15:01:20 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.693 15:01:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:10.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.693 --rc genhtml_branch_coverage=1 00:04:10.693 --rc genhtml_function_coverage=1 00:04:10.693 --rc genhtml_legend=1 00:04:10.693 --rc geninfo_all_blocks=1 00:04:10.693 --rc geninfo_unexecuted_blocks=1 00:04:10.693 00:04:10.693 ' 00:04:10.693 15:01:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:10.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.693 --rc genhtml_branch_coverage=1 00:04:10.693 --rc genhtml_function_coverage=1 00:04:10.693 --rc genhtml_legend=1 00:04:10.693 --rc geninfo_all_blocks=1 00:04:10.693 --rc geninfo_unexecuted_blocks=1 
00:04:10.693 00:04:10.693 ' 00:04:10.693 15:01:20 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:10.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.693 --rc genhtml_branch_coverage=1 00:04:10.693 --rc genhtml_function_coverage=1 00:04:10.693 --rc genhtml_legend=1 00:04:10.693 --rc geninfo_all_blocks=1 00:04:10.693 --rc geninfo_unexecuted_blocks=1 00:04:10.693 00:04:10.693 ' 00:04:10.693 15:01:20 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:10.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.693 --rc genhtml_branch_coverage=1 00:04:10.693 --rc genhtml_function_coverage=1 00:04:10.693 --rc genhtml_legend=1 00:04:10.693 --rc geninfo_all_blocks=1 00:04:10.693 --rc geninfo_unexecuted_blocks=1 00:04:10.693 00:04:10.693 ' 00:04:10.693 15:01:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:10.693 15:01:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:10.694 15:01:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:10.694 15:01:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:10.694 15:01:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.694 15:01:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.694 15:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:10.694 ************************************ 00:04:10.694 START TEST default_locks 00:04:10.694 ************************************ 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3734826 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3734826 00:04:10.694 15:01:20 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3734826 ']' 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:10.694 15:01:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:10.694 [2024-10-01 15:01:20.463789] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:04:10.694 [2024-10-01 15:01:20.463846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3734826 ] 00:04:10.694 [2024-10-01 15:01:20.524944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.955 [2024-10-01 15:01:20.589479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.526 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:11.526 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:11.526 15:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3734826 00:04:11.526 15:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:11.526 15:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3734826 00:04:12.097 lslocks: write error 00:04:12.097 15:01:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3734826 00:04:12.097 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3734826 ']' 00:04:12.097 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3734826 00:04:12.097 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:12.097 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:12.098 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3734826 00:04:12.098 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:12.098 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:12.098 15:01:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3734826' 00:04:12.098 killing process with pid 3734826 00:04:12.098 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3734826 00:04:12.098 15:01:21 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3734826 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3734826 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3734826 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3734826 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3734826 ']' 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:12.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3734826) - No such process 00:04:12.358 ERROR: process (pid: 3734826) is no longer running 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:12.358 00:04:12.358 real 0m1.759s 00:04:12.358 user 0m1.887s 00:04:12.358 sys 0m0.610s 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.358 15:01:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:12.358 ************************************ 00:04:12.358 END TEST default_locks 00:04:12.358 ************************************ 00:04:12.358 15:01:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:12.358 15:01:22 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.358 15:01:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.358 15:01:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:12.619 ************************************ 00:04:12.619 START TEST default_locks_via_rpc 00:04:12.619 ************************************ 00:04:12.619 15:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:12.619 15:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3735194 00:04:12.619 15:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3735194 00:04:12.620 15:01:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:12.620 15:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3735194 ']' 00:04:12.620 15:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.620 15:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:12.620 15:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.620 15:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:12.620 15:01:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.620 [2024-10-01 15:01:22.297954] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:04:12.620 [2024-10-01 15:01:22.298012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735194 ] 00:04:12.620 [2024-10-01 15:01:22.358099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.620 [2024-10-01 15:01:22.421630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.561 15:01:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3735194 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3735194 00:04:13.561 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:13.822 15:01:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3735194 00:04:13.822 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3735194 ']' 00:04:13.822 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3735194 00:04:13.822 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:13.822 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:13.822 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3735194 00:04:14.083 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.083 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.083 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3735194' 00:04:14.083 killing process with pid 3735194 00:04:14.083 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3735194 00:04:14.083 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3735194 00:04:14.083 00:04:14.083 real 0m1.679s 00:04:14.083 user 0m1.826s 00:04:14.083 sys 0m0.566s 00:04:14.083 15:01:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.083 15:01:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.083 ************************************ 00:04:14.083 END TEST default_locks_via_rpc 00:04:14.083 ************************************ 00:04:14.344 15:01:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:14.344 15:01:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.344 15:01:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.344 15:01:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:14.344 ************************************ 00:04:14.344 START TEST non_locking_app_on_locked_coremask 00:04:14.344 ************************************ 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3735558 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3735558 /var/tmp/spdk.sock 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3735558 ']' 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:14.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:14.344 15:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:14.344 [2024-10-01 15:01:24.042612] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:04:14.344 [2024-10-01 15:01:24.042664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735558 ] 00:04:14.344 [2024-10-01 15:01:24.103935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.344 [2024-10-01 15:01:24.170756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3735814 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3735814 /var/tmp/spdk2.sock 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3735814 ']' 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:15.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.289 15:01:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:15.289 [2024-10-01 15:01:24.885746] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:04:15.289 [2024-10-01 15:01:24.885801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3735814 ] 00:04:15.289 [2024-10-01 15:01:24.972893] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:15.289 [2024-10-01 15:01:24.972922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.289 [2024-10-01 15:01:25.106415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.861 15:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.861 15:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:15.861 15:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3735558 00:04:15.861 15:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:15.861 15:01:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3735558 00:04:16.434 lslocks: write error 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3735558 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3735558 ']' 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3735558 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3735558 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3735558' 00:04:16.434 killing process with pid 3735558 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3735558 00:04:16.434 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3735558 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3735814 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3735814 ']' 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3735814 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3735814 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3735814' 00:04:17.005 killing process with pid 3735814 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3735814 00:04:17.005 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3735814 00:04:17.265 00:04:17.265 real 0m2.978s 00:04:17.265 user 0m3.291s 00:04:17.265 sys 0m0.904s 00:04:17.265 15:01:26 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.265 15:01:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:17.265 ************************************ 00:04:17.265 END TEST non_locking_app_on_locked_coremask 00:04:17.265 ************************************ 00:04:17.265 15:01:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:17.265 15:01:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.265 15:01:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.265 15:01:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:17.265 ************************************ 00:04:17.265 START TEST locking_app_on_unlocked_coremask 00:04:17.265 ************************************ 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3736265 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3736265 /var/tmp/spdk.sock 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3736265 ']' 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:17.265 15:01:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:17.265 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:17.265 [2024-10-01 15:01:27.097564] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:04:17.266 [2024-10-01 15:01:27.097613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736265 ] 00:04:17.526 [2024-10-01 15:01:27.158065] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:17.526 [2024-10-01 15:01:27.158096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:17.526 [2024-10-01 15:01:27.222486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3736379
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3736379 /var/tmp/spdk2.sock
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3736379 ']'
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:18.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:18.095 15:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:18.095 [2024-10-01 15:01:27.950852] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:04:18.095 [2024-10-01 15:01:27.950911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736379 ]
00:04:18.355 [2024-10-01 15:01:28.042873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:18.355 [2024-10-01 15:01:28.172551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:18.928 15:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:18.928 15:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:18.928 15:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3736379
00:04:18.928 15:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3736379
00:04:18.928 15:01:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:19.869 lslocks: write error
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3736265
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3736265 ']'
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3736265
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3736265
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3736265'
00:04:19.869 killing process with pid 3736265
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3736265
00:04:19.869 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3736265
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3736379
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3736379 ']'
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3736379
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3736379
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3736379'
00:04:20.129 killing process with pid 3736379
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3736379
00:04:20.129 15:01:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3736379
00:04:20.390
00:04:20.390 real	0m3.139s
00:04:20.390 user	0m3.473s
00:04:20.390 sys	0m0.956s
00:04:20.390 15:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:20.390 15:01:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:20.390 ************************************
00:04:20.390 END TEST locking_app_on_unlocked_coremask
00:04:20.390 ************************************
00:04:20.390 15:01:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:20.390 15:01:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:20.390 15:01:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:20.390 15:01:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:20.650 ************************************
00:04:20.651 START TEST locking_app_on_locked_coremask
00:04:20.651 ************************************
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3736976
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3736976 /var/tmp/spdk.sock
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3736976 ']'
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:20.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:20.651 15:01:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:20.651 [2024-10-01 15:01:30.311641] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:04:20.651 [2024-10-01 15:01:30.311693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3736976 ]
00:04:20.651 [2024-10-01 15:01:30.373162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:20.651 [2024-10-01 15:01:30.438825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3737025
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3737025 /var/tmp/spdk2.sock
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3737025 /var/tmp/spdk2.sock
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3737025 /var/tmp/spdk2.sock
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3737025 ']'
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:21.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:21.592 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:21.592 [2024-10-01 15:01:31.168867] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:04:21.592 [2024-10-01 15:01:31.168919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737025 ]
00:04:21.592 [2024-10-01 15:01:31.263129] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3736976 has claimed it.
00:04:21.592 [2024-10-01 15:01:31.263171] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:22.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3737025) - No such process
00:04:22.163 ERROR: process (pid: 3737025) is no longer running
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3736976
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3736976
00:04:22.163 15:01:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:22.734 lslocks: write error
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3736976
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3736976 ']'
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3736976
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3736976
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3736976'
00:04:22.735 killing process with pid 3736976
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3736976
00:04:22.735 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3736976
00:04:22.995
00:04:22.995 real	0m2.341s
00:04:22.995 user	0m2.648s
00:04:22.995 sys	0m0.659s
00:04:22.995 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:22.995 15:01:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:22.995 ************************************
00:04:22.995 END TEST locking_app_on_locked_coremask
00:04:22.995 ************************************
00:04:22.995 15:01:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:04:22.995 15:01:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:22.995 15:01:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:22.995 15:01:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:22.995 ************************************
00:04:22.995 START TEST locking_overlapped_coremask
00:04:22.995 ************************************
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3737377
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3737377 /var/tmp/spdk.sock
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3737377 ']'
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:22.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:22.995 15:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:22.995 [2024-10-01 15:01:32.727383] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:04:22.995 [2024-10-01 15:01:32.727440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737377 ]
00:04:22.995 [2024-10-01 15:01:32.792188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:23.255 [2024-10-01 15:01:32.868056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:04:23.255 [2024-10-01 15:01:32.868182] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:04:23.255 [2024-10-01 15:01:32.868184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3737704
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3737704 /var/tmp/spdk2.sock
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3737704 /var/tmp/spdk2.sock
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3737704 /var/tmp/spdk2.sock
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3737704 ']'
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:23.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:23.827 15:01:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:23.827 [2024-10-01 15:01:33.582848] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:04:23.827 [2024-10-01 15:01:33.582901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737704 ]
00:04:23.827 [2024-10-01 15:01:33.656764] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3737377 has claimed it.
00:04:23.827 [2024-10-01 15:01:33.656796] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:24.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3737704) - No such process
00:04:24.397 ERROR: process (pid: 3737704) is no longer running
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3737377
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3737377 ']'
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3737377
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:24.397 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3737377
00:04:24.658 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:24.658 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:24.658 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3737377'
00:04:24.658 killing process with pid 3737377
00:04:24.658 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3737377
00:04:24.658 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3737377
00:04:24.658
00:04:24.658 real	0m1.828s
00:04:24.658 user	0m5.196s
00:04:24.658 sys	0m0.395s
00:04:24.658 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:24.658 15:01:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:24.658 ************************************
00:04:24.658 END TEST locking_overlapped_coremask
00:04:24.658 ************************************
00:04:24.918 15:01:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:04:24.918 15:01:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:24.918 15:01:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:24.918 15:01:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:24.918 ************************************
00:04:24.918 START TEST locking_overlapped_coremask_via_rpc
00:04:24.918 ************************************
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3737807
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3737807 /var/tmp/spdk.sock
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3737807 ']'
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:24.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:24.918 15:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:24.918 [2024-10-01 15:01:34.615581] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:04:24.918 [2024-10-01 15:01:34.615633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737807 ]
00:04:24.918 [2024-10-01 15:01:34.677309] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:24.918 [2024-10-01 15:01:34.677340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:24.918 [2024-10-01 15:01:34.747941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:04:24.918 [2024-10-01 15:01:34.748088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:04:24.918 [2024-10-01 15:01:34.748259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3738079
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3738079 /var/tmp/spdk2.sock
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3738079 ']'
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:25.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:25.859 15:01:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:25.859 [2024-10-01 15:01:35.470928] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:04:25.859 [2024-10-01 15:01:35.470983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738079 ]
00:04:25.859 [2024-10-01 15:01:35.541870] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:25.859 [2024-10-01 15:01:35.541891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:25.859 [2024-10-01 15:01:35.652125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:04:25.859 [2024-10-01 15:01:35.656122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:04:25.859 [2024-10-01 15:01:35.656124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:26.430 [2024-10-01 15:01:36.276057] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3737807 has claimed it.
00:04:26.430 request:
00:04:26.430 {
00:04:26.430 "method": "framework_enable_cpumask_locks",
00:04:26.430 "req_id": 1
00:04:26.430 }
00:04:26.430 Got JSON-RPC error response
00:04:26.430 response:
00:04:26.430 {
00:04:26.430 "code": -32603,
00:04:26.430 "message": "Failed to claim CPU core: 2"
00:04:26.430 }
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3737807 /var/tmp/spdk.sock
00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831
-- # '[' -z 3737807 ']' 00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.430 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3738079 /var/tmp/spdk2.sock 00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3738079 ']' 00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:26.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.691 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.952 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.952 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:26.952 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:26.952 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:26.952 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:26.952 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:26.952 00:04:26.952 real 0m2.092s 00:04:26.952 user 0m0.857s 00:04:26.952 sys 0m0.156s 00:04:26.952 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.952 15:01:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.952 ************************************ 00:04:26.952 END TEST locking_overlapped_coremask_via_rpc 00:04:26.952 ************************************ 00:04:26.952 15:01:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:26.952 15:01:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3737807 ]] 00:04:26.952 15:01:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3737807 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3737807 ']' 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3737807 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3737807 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3737807' 00:04:26.952 killing process with pid 3737807 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3737807 00:04:26.952 15:01:36 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3737807 00:04:27.213 15:01:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3738079 ]] 00:04:27.213 15:01:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3738079 00:04:27.213 15:01:36 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3738079 ']' 00:04:27.213 15:01:36 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3738079 00:04:27.213 15:01:36 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:27.213 15:01:36 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.213 15:01:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3738079 00:04:27.213 15:01:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:27.213 15:01:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:27.213 15:01:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3738079' 00:04:27.213 killing process with pid 3738079 00:04:27.213 15:01:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3738079 00:04:27.213 15:01:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3738079 00:04:27.473 15:01:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:27.473 15:01:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:27.473 15:01:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3737807 ]] 00:04:27.473 15:01:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3737807 00:04:27.473 15:01:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3737807 ']' 00:04:27.473 15:01:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3737807 00:04:27.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3737807) - No such process 00:04:27.473 15:01:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3737807 is not found' 00:04:27.473 Process with pid 3737807 is not found 00:04:27.473 15:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3738079 ]] 00:04:27.473 15:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3738079 00:04:27.473 15:01:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3738079 ']' 00:04:27.473 15:01:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3738079 00:04:27.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3738079) - No such process 00:04:27.473 15:01:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3738079 is not found' 00:04:27.473 Process with pid 3738079 is not found 00:04:27.473 15:01:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:27.473 00:04:27.473 real 0m17.115s 00:04:27.473 user 0m29.434s 00:04:27.473 sys 0m5.165s 00:04:27.473 15:01:37 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.473 
15:01:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:27.473 ************************************ 00:04:27.473 END TEST cpu_locks 00:04:27.473 ************************************ 00:04:27.473 00:04:27.473 real 0m43.640s 00:04:27.473 user 1m25.668s 00:04:27.473 sys 0m8.491s 00:04:27.473 15:01:37 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.473 15:01:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.473 ************************************ 00:04:27.473 END TEST event 00:04:27.473 ************************************ 00:04:27.733 15:01:37 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:27.733 15:01:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.733 15:01:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.733 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:04:27.733 ************************************ 00:04:27.733 START TEST thread 00:04:27.733 ************************************ 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:27.733 * Looking for test storage... 
00:04:27.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:27.733 15:01:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.733 15:01:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.733 15:01:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.733 15:01:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.733 15:01:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.733 15:01:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.733 15:01:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.733 15:01:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.733 15:01:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.733 15:01:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.733 15:01:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.733 15:01:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:27.733 15:01:37 thread -- scripts/common.sh@345 -- # : 1 00:04:27.733 15:01:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.733 15:01:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.733 15:01:37 thread -- scripts/common.sh@365 -- # decimal 1 00:04:27.733 15:01:37 thread -- scripts/common.sh@353 -- # local d=1 00:04:27.733 15:01:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.733 15:01:37 thread -- scripts/common.sh@355 -- # echo 1 00:04:27.733 15:01:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.733 15:01:37 thread -- scripts/common.sh@366 -- # decimal 2 00:04:27.733 15:01:37 thread -- scripts/common.sh@353 -- # local d=2 00:04:27.733 15:01:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.733 15:01:37 thread -- scripts/common.sh@355 -- # echo 2 00:04:27.733 15:01:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.733 15:01:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.733 15:01:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.733 15:01:37 thread -- scripts/common.sh@368 -- # return 0 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.733 --rc genhtml_branch_coverage=1 00:04:27.733 --rc genhtml_function_coverage=1 00:04:27.733 --rc genhtml_legend=1 00:04:27.733 --rc geninfo_all_blocks=1 00:04:27.733 --rc geninfo_unexecuted_blocks=1 00:04:27.733 00:04:27.733 ' 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.733 --rc genhtml_branch_coverage=1 00:04:27.733 --rc genhtml_function_coverage=1 00:04:27.733 --rc genhtml_legend=1 00:04:27.733 --rc geninfo_all_blocks=1 00:04:27.733 --rc geninfo_unexecuted_blocks=1 00:04:27.733 00:04:27.733 ' 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:27.733 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.733 --rc genhtml_branch_coverage=1 00:04:27.733 --rc genhtml_function_coverage=1 00:04:27.733 --rc genhtml_legend=1 00:04:27.733 --rc geninfo_all_blocks=1 00:04:27.733 --rc geninfo_unexecuted_blocks=1 00:04:27.733 00:04:27.733 ' 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.733 --rc genhtml_branch_coverage=1 00:04:27.733 --rc genhtml_function_coverage=1 00:04:27.733 --rc genhtml_legend=1 00:04:27.733 --rc geninfo_all_blocks=1 00:04:27.733 --rc geninfo_unexecuted_blocks=1 00:04:27.733 00:04:27.733 ' 00:04:27.733 15:01:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.733 15:01:37 thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.993 ************************************ 00:04:27.993 START TEST thread_poller_perf 00:04:27.993 ************************************ 00:04:27.993 15:01:37 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:27.993 [2024-10-01 15:01:37.650705] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:04:27.993 [2024-10-01 15:01:37.650819] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738532 ] 00:04:27.993 [2024-10-01 15:01:37.718007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.993 [2024-10-01 15:01:37.792564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.993 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:29.378 ====================================== 00:04:29.378 busy:2408684934 (cyc) 00:04:29.378 total_run_count: 285000 00:04:29.378 tsc_hz: 2400000000 (cyc) 00:04:29.378 ====================================== 00:04:29.378 poller_cost: 8451 (cyc), 3521 (nsec) 00:04:29.378 00:04:29.378 real 0m1.226s 00:04:29.378 user 0m1.136s 00:04:29.378 sys 0m0.085s 00:04:29.378 15:01:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.378 15:01:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.378 ************************************ 00:04:29.378 END TEST thread_poller_perf 00:04:29.378 ************************************ 00:04:29.378 15:01:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:29.378 15:01:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:29.378 15:01:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.378 15:01:38 thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.378 ************************************ 00:04:29.378 START TEST thread_poller_perf 00:04:29.378 ************************************ 00:04:29.378 15:01:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:29.378 [2024-10-01 15:01:38.952009] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:04:29.378 [2024-10-01 15:01:38.952113] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738883 ] 00:04:29.378 [2024-10-01 15:01:39.015388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.378 [2024-10-01 15:01:39.080133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.378 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:30.322 ====================================== 00:04:30.322 busy:2401727366 (cyc) 00:04:30.322 total_run_count: 3799000 00:04:30.322 tsc_hz: 2400000000 (cyc) 00:04:30.322 ====================================== 00:04:30.322 poller_cost: 632 (cyc), 263 (nsec) 00:04:30.322 00:04:30.322 real 0m1.206s 00:04:30.322 user 0m1.131s 00:04:30.322 sys 0m0.071s 00:04:30.322 15:01:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.323 15:01:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.323 ************************************ 00:04:30.323 END TEST thread_poller_perf 00:04:30.323 ************************************ 00:04:30.323 15:01:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:30.323 00:04:30.323 real 0m2.788s 00:04:30.323 user 0m2.440s 00:04:30.323 sys 0m0.361s 00:04:30.323 15:01:40 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.323 15:01:40 thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.323 ************************************ 00:04:30.323 END TEST thread 00:04:30.323 ************************************ 00:04:30.585 15:01:40 -- spdk/autotest.sh@171 -- 
# [[ 0 -eq 1 ]] 00:04:30.585 15:01:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:30.585 15:01:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.585 15:01:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.585 15:01:40 -- common/autotest_common.sh@10 -- # set +x 00:04:30.585 ************************************ 00:04:30.585 START TEST app_cmdline 00:04:30.585 ************************************ 00:04:30.585 15:01:40 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:30.585 * Looking for test storage... 00:04:30.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:30.585 15:01:40 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:30.585 15:01:40 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:04:30.585 15:01:40 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:30.585 15:01:40 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:30.585 15:01:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.846 15:01:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:30.846 15:01:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:30.846 15:01:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.846 15:01:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:30.846 15:01:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.846 15:01:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.846 15:01:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.846 15:01:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:30.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.846 --rc genhtml_branch_coverage=1 00:04:30.846 --rc genhtml_function_coverage=1 00:04:30.846 --rc genhtml_legend=1 00:04:30.846 --rc geninfo_all_blocks=1 00:04:30.846 --rc geninfo_unexecuted_blocks=1 00:04:30.846 00:04:30.846 ' 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:30.846 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.846 --rc genhtml_branch_coverage=1 00:04:30.846 --rc genhtml_function_coverage=1 00:04:30.846 --rc genhtml_legend=1 00:04:30.846 --rc geninfo_all_blocks=1 00:04:30.846 --rc geninfo_unexecuted_blocks=1 00:04:30.846 00:04:30.846 ' 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:30.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.846 --rc genhtml_branch_coverage=1 00:04:30.846 --rc genhtml_function_coverage=1 00:04:30.846 --rc genhtml_legend=1 00:04:30.846 --rc geninfo_all_blocks=1 00:04:30.846 --rc geninfo_unexecuted_blocks=1 00:04:30.846 00:04:30.846 ' 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:30.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.846 --rc genhtml_branch_coverage=1 00:04:30.846 --rc genhtml_function_coverage=1 00:04:30.846 --rc genhtml_legend=1 00:04:30.846 --rc geninfo_all_blocks=1 00:04:30.846 --rc geninfo_unexecuted_blocks=1 00:04:30.846 00:04:30.846 ' 00:04:30.846 15:01:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:30.846 15:01:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3739281 00:04:30.846 15:01:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3739281 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3739281 ']' 00:04:30.846 15:01:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.846 15:01:40 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:30.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.847 15:01:40 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.847 15:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:30.847 [2024-10-01 15:01:40.522723] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:04:30.847 [2024-10-01 15:01:40.522781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739281 ] 00:04:30.847 [2024-10-01 15:01:40.584551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.847 [2024-10-01 15:01:40.649015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:31.790 { 00:04:31.790 "version": "SPDK v25.01-pre git sha1 fefe29c8c", 00:04:31.790 "fields": { 00:04:31.790 "major": 25, 00:04:31.790 "minor": 1, 00:04:31.790 "patch": 0, 00:04:31.790 "suffix": "-pre", 00:04:31.790 "commit": "fefe29c8c" 00:04:31.790 } 00:04:31.790 } 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:31.790 15:01:41 app_cmdline -- 
app/cmdline.sh@26 -- # jq -r '.[]' 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:31.790 15:01:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:31.790 15:01:41 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:32.052 request: 00:04:32.052 { 00:04:32.052 "method": "env_dpdk_get_mem_stats", 00:04:32.052 "req_id": 1 00:04:32.052 } 00:04:32.052 Got JSON-RPC error response 00:04:32.052 response: 00:04:32.052 { 00:04:32.052 "code": -32601, 00:04:32.052 "message": "Method not found" 00:04:32.052 } 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:32.052 15:01:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3739281 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3739281 ']' 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3739281 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3739281 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3739281' 00:04:32.052 killing process with pid 3739281 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@969 -- # kill 3739281 00:04:32.052 15:01:41 app_cmdline -- common/autotest_common.sh@974 -- # wait 3739281 00:04:32.313 00:04:32.313 real 0m1.770s 00:04:32.313 user 0m2.139s 00:04:32.313 sys 
0m0.450s 00:04:32.313 15:01:42 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.313 15:01:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:32.313 ************************************ 00:04:32.313 END TEST app_cmdline 00:04:32.313 ************************************ 00:04:32.313 15:01:42 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:32.313 15:01:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.313 15:01:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.313 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:04:32.313 ************************************ 00:04:32.313 START TEST version 00:04:32.313 ************************************ 00:04:32.313 15:01:42 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:32.574 * Looking for test storage... 00:04:32.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1681 -- # lcov --version 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:32.574 15:01:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.574 15:01:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.574 15:01:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.574 15:01:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.574 15:01:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.574 15:01:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.574 15:01:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.574 15:01:42 version -- scripts/common.sh@338 -- # local 
'op=<' 00:04:32.574 15:01:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.574 15:01:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.574 15:01:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.574 15:01:42 version -- scripts/common.sh@344 -- # case "$op" in 00:04:32.574 15:01:42 version -- scripts/common.sh@345 -- # : 1 00:04:32.574 15:01:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.574 15:01:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.574 15:01:42 version -- scripts/common.sh@365 -- # decimal 1 00:04:32.574 15:01:42 version -- scripts/common.sh@353 -- # local d=1 00:04:32.574 15:01:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.574 15:01:42 version -- scripts/common.sh@355 -- # echo 1 00:04:32.574 15:01:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.574 15:01:42 version -- scripts/common.sh@366 -- # decimal 2 00:04:32.574 15:01:42 version -- scripts/common.sh@353 -- # local d=2 00:04:32.574 15:01:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.574 15:01:42 version -- scripts/common.sh@355 -- # echo 2 00:04:32.574 15:01:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.574 15:01:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.574 15:01:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.574 15:01:42 version -- scripts/common.sh@368 -- # return 0 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.574 --rc genhtml_branch_coverage=1 00:04:32.574 --rc genhtml_function_coverage=1 00:04:32.574 --rc genhtml_legend=1 00:04:32.574 --rc geninfo_all_blocks=1 00:04:32.574 --rc 
geninfo_unexecuted_blocks=1 00:04:32.574 00:04:32.574 ' 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.574 --rc genhtml_branch_coverage=1 00:04:32.574 --rc genhtml_function_coverage=1 00:04:32.574 --rc genhtml_legend=1 00:04:32.574 --rc geninfo_all_blocks=1 00:04:32.574 --rc geninfo_unexecuted_blocks=1 00:04:32.574 00:04:32.574 ' 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.574 --rc genhtml_branch_coverage=1 00:04:32.574 --rc genhtml_function_coverage=1 00:04:32.574 --rc genhtml_legend=1 00:04:32.574 --rc geninfo_all_blocks=1 00:04:32.574 --rc geninfo_unexecuted_blocks=1 00:04:32.574 00:04:32.574 ' 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:32.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.574 --rc genhtml_branch_coverage=1 00:04:32.574 --rc genhtml_function_coverage=1 00:04:32.574 --rc genhtml_legend=1 00:04:32.574 --rc geninfo_all_blocks=1 00:04:32.574 --rc geninfo_unexecuted_blocks=1 00:04:32.574 00:04:32.574 ' 00:04:32.574 15:01:42 version -- app/version.sh@17 -- # get_header_version major 00:04:32.574 15:01:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:32.574 15:01:42 version -- app/version.sh@14 -- # cut -f2 00:04:32.574 15:01:42 version -- app/version.sh@14 -- # tr -d '"' 00:04:32.574 15:01:42 version -- app/version.sh@17 -- # major=25 00:04:32.574 15:01:42 version -- app/version.sh@18 -- # get_header_version minor 00:04:32.574 15:01:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:32.574 15:01:42 version -- app/version.sh@14 -- 
# cut -f2 00:04:32.574 15:01:42 version -- app/version.sh@14 -- # tr -d '"' 00:04:32.574 15:01:42 version -- app/version.sh@18 -- # minor=1 00:04:32.574 15:01:42 version -- app/version.sh@19 -- # get_header_version patch 00:04:32.574 15:01:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:32.574 15:01:42 version -- app/version.sh@14 -- # cut -f2 00:04:32.574 15:01:42 version -- app/version.sh@14 -- # tr -d '"' 00:04:32.574 15:01:42 version -- app/version.sh@19 -- # patch=0 00:04:32.574 15:01:42 version -- app/version.sh@20 -- # get_header_version suffix 00:04:32.574 15:01:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:32.574 15:01:42 version -- app/version.sh@14 -- # cut -f2 00:04:32.574 15:01:42 version -- app/version.sh@14 -- # tr -d '"' 00:04:32.574 15:01:42 version -- app/version.sh@20 -- # suffix=-pre 00:04:32.574 15:01:42 version -- app/version.sh@22 -- # version=25.1 00:04:32.574 15:01:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:32.574 15:01:42 version -- app/version.sh@28 -- # version=25.1rc0 00:04:32.574 15:01:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:32.574 15:01:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:32.574 15:01:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:32.574 15:01:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:32.574 00:04:32.574 real 0m0.281s 00:04:32.574 user 0m0.164s 00:04:32.574 sys 
0m0.166s 00:04:32.574 15:01:42 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.574 15:01:42 version -- common/autotest_common.sh@10 -- # set +x 00:04:32.574 ************************************ 00:04:32.574 END TEST version 00:04:32.574 ************************************ 00:04:32.574 15:01:42 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:32.574 15:01:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:32.574 15:01:42 -- spdk/autotest.sh@194 -- # uname -s 00:04:32.574 15:01:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:32.574 15:01:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:32.574 15:01:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:32.574 15:01:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:32.574 15:01:42 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:32.574 15:01:42 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:32.574 15:01:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.574 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:04:32.836 15:01:42 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:32.836 15:01:42 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:32.836 15:01:42 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:32.836 15:01:42 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:32.836 15:01:42 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:32.836 15:01:42 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:32.836 15:01:42 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:32.836 15:01:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:32.836 15:01:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.836 15:01:42 -- common/autotest_common.sh@10 -- # set +x 00:04:32.836 ************************************ 00:04:32.836 START TEST nvmf_tcp 00:04:32.836 ************************************ 00:04:32.836 15:01:42 nvmf_tcp -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:32.836 * Looking for test storage... 00:04:32.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:32.836 15:01:42 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:32.836 15:01:42 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:32.836 15:01:42 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:32.836 15:01:42 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.836 15:01:42 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.837 15:01:42 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:32.837 15:01:42 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:32.837 15:01:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.837 15:01:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.837 15:01:42 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.099 15:01:42 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:33.099 15:01:42 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.099 15:01:42 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.099 --rc genhtml_branch_coverage=1 00:04:33.099 --rc genhtml_function_coverage=1 00:04:33.099 --rc genhtml_legend=1 00:04:33.099 --rc geninfo_all_blocks=1 00:04:33.099 --rc geninfo_unexecuted_blocks=1 00:04:33.099 00:04:33.099 ' 00:04:33.099 15:01:42 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.099 --rc genhtml_branch_coverage=1 00:04:33.099 --rc genhtml_function_coverage=1 00:04:33.099 --rc genhtml_legend=1 00:04:33.099 --rc geninfo_all_blocks=1 00:04:33.099 --rc geninfo_unexecuted_blocks=1 00:04:33.099 00:04:33.099 ' 00:04:33.099 15:01:42 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:33.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.099 --rc genhtml_branch_coverage=1 00:04:33.099 --rc genhtml_function_coverage=1 00:04:33.099 --rc genhtml_legend=1 00:04:33.099 --rc geninfo_all_blocks=1 00:04:33.099 --rc geninfo_unexecuted_blocks=1 00:04:33.099 00:04:33.099 ' 00:04:33.099 15:01:42 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.099 --rc genhtml_branch_coverage=1 00:04:33.099 --rc genhtml_function_coverage=1 00:04:33.099 --rc genhtml_legend=1 00:04:33.099 --rc geninfo_all_blocks=1 00:04:33.099 --rc geninfo_unexecuted_blocks=1 00:04:33.099 00:04:33.099 ' 00:04:33.099 15:01:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:33.099 15:01:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:33.099 15:01:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:33.099 15:01:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:33.099 15:01:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.099 15:01:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.099 ************************************ 00:04:33.099 START TEST nvmf_target_core 00:04:33.099 ************************************ 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:33.099 * Looking for test storage... 
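The `get_header_version major/minor/patch/suffix` steps traced above pull the `SPDK_VERSION_*` macros out of `include/spdk/version.h` with a grep/cut/tr pipeline and then assemble `25.1rc0`. A minimal sketch of that extraction, with a heredoc standing in for the real `version.h` (the heredoc contents and the field index are illustrative, not the exact upstream script):

```shell
#!/usr/bin/env bash
# Sketch of app/version.sh's get_header_version: grep the macro line,
# take the value field, strip quotes. The heredoc is a stand-in for
# include/spdk/version.h.
header=$(cat <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF
)

get_header_version() {
    printf '%s\n' "$header" \
        | grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
        | cut -d' ' -f3 \
        | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"       # patch=0 here, so skipped
[ "$suffix" = "-pre" ] && version="${version}rc0"   # -pre maps to the rc0 tag
echo "$version"
```

This reproduces the `version=25.1` → `version=25.1rc0` transition visible in the trace, which the script then compares against `python3 -c 'import spdk; print(spdk.__version__)'`.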
00:04:33.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.099 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.100 --rc genhtml_branch_coverage=1 00:04:33.100 --rc genhtml_function_coverage=1 00:04:33.100 --rc genhtml_legend=1 00:04:33.100 --rc geninfo_all_blocks=1 00:04:33.100 --rc geninfo_unexecuted_blocks=1 00:04:33.100 00:04:33.100 ' 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.100 --rc genhtml_branch_coverage=1 
00:04:33.100 --rc genhtml_function_coverage=1 00:04:33.100 --rc genhtml_legend=1 00:04:33.100 --rc geninfo_all_blocks=1 00:04:33.100 --rc geninfo_unexecuted_blocks=1 00:04:33.100 00:04:33.100 ' 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:33.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.100 --rc genhtml_branch_coverage=1 00:04:33.100 --rc genhtml_function_coverage=1 00:04:33.100 --rc genhtml_legend=1 00:04:33.100 --rc geninfo_all_blocks=1 00:04:33.100 --rc geninfo_unexecuted_blocks=1 00:04:33.100 00:04:33.100 ' 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.100 --rc genhtml_branch_coverage=1 00:04:33.100 --rc genhtml_function_coverage=1 00:04:33.100 --rc genhtml_legend=1 00:04:33.100 --rc geninfo_all_blocks=1 00:04:33.100 --rc geninfo_unexecuted_blocks=1 00:04:33.100 00:04:33.100 ' 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.100 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:33.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
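The `[: : integer expression expected` warning printed above comes from `nvmf/common.sh` line 33 evaluating `'[' '' -eq 1 ']'`: `-eq` requires integer operands, and the variable behind it was empty. A small sketch of the failure mode and a guarded form (the `flag`/`mode` names are illustrative, not from the harness):

```shell
#!/usr/bin/env bash
# An empty string is not an integer, so this test writes
# "[: : integer expression expected" to stderr and returns non-zero,
# exactly like the common.sh:33 message in the log above.
flag=''
if [ "$flag" -eq 1 ] 2>/dev/null; then
    mode=huge
else
    mode=default
fi

# Guarded form: default the empty value to 0 before the numeric test,
# so the branch is taken cleanly with no stderr noise.
if [ "${flag:-0}" -eq 1 ]; then
    mode=huge
else
    mode=default
fi
echo "$mode"
```

The harness tolerates the warning because the `[` failure status simply selects the else path, but the `${var:-0}` expansion avoids the stderr line entirely.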
00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.362 15:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:33.362 ************************************ 00:04:33.362 START TEST nvmf_abort 00:04:33.362 ************************************ 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:33.362 * Looking for test storage... 
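The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` blocks repeated throughout this trace split both version strings on `.`, `-`, and `:` into arrays, then compare field by field, padding the shorter array with zeros. A condensed sketch of that loop, simplified to just the `<` operator shown here (the full `scripts/common.sh` helper also handles `>`, `==`, and non-numeric fields):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions '<' path: split on .-: with read -ra,
# then compare numerically field by field, treating missing fields as 0.
version_lt() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # all fields equal, so not strictly less-than
}

version_lt 1.15 2 && echo yes || echo no
```

With `1.15` vs `2`, the first field already decides it (`1 < 2`), which is why the trace returns 0 and proceeds to set the `LCOV_OPTS` branch-coverage flags.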
00:04:33.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.362 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.363 
15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.363 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.624 --rc genhtml_branch_coverage=1 00:04:33.624 --rc genhtml_function_coverage=1 00:04:33.624 --rc genhtml_legend=1 00:04:33.624 --rc geninfo_all_blocks=1 00:04:33.624 --rc 
geninfo_unexecuted_blocks=1 00:04:33.624 00:04:33.624 ' 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.624 --rc genhtml_branch_coverage=1 00:04:33.624 --rc genhtml_function_coverage=1 00:04:33.624 --rc genhtml_legend=1 00:04:33.624 --rc geninfo_all_blocks=1 00:04:33.624 --rc geninfo_unexecuted_blocks=1 00:04:33.624 00:04:33.624 ' 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:33.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.624 --rc genhtml_branch_coverage=1 00:04:33.624 --rc genhtml_function_coverage=1 00:04:33.624 --rc genhtml_legend=1 00:04:33.624 --rc geninfo_all_blocks=1 00:04:33.624 --rc geninfo_unexecuted_blocks=1 00:04:33.624 00:04:33.624 ' 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.624 --rc genhtml_branch_coverage=1 00:04:33.624 --rc genhtml_function_coverage=1 00:04:33.624 --rc genhtml_legend=1 00:04:33.624 --rc geninfo_all_blocks=1 00:04:33.624 --rc geninfo_unexecuted_blocks=1 00:04:33.624 00:04:33.624 ' 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
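Earlier in this section, the `NOT ... rpc.py env_dpdk_get_mem_stats` step received a JSON-RPC error with code -32601 ("Method not found"), which is what the test expects after the target process is gone. A hedged sketch of checking that response shape with jq (the tool already used by `app/cmdline.sh@26` above), with a canned response standing in for a live `rpc.py` call:

```shell
#!/usr/bin/env bash
# Canned stand-in for the error object rpc.py printed in the trace:
# {"code": -32601, "message": "Method not found"}.
response='{"code": -32601, "message": "Method not found"}'

code=$(printf '%s' "$response" | jq -r '.code')
message=$(printf '%s' "$response" | jq -r '.message')

# -32601 is the JSON-RPC 2.0 "method not found" error code; the
# autotest NOT wrapper treats this failure as the expected outcome.
if [ "$code" -eq -32601 ]; then
    echo "unsupported method: $message"
fi
```

The `NOT`/`valid_exec_arg` machinery in `autotest_common.sh` only checks the exit status (`es=1`), not the JSON body; parsing the code as above is an extra check, not what the harness itself does.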
00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.624 15:01:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.624 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:33.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # prepare_net_devs 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@434 -- # local -g is_hw=no 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # remove_spdk_ns 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:33.625 15:01:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:04:41.775 15:01:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:04:41.775 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:04:41.775 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:04:41.775 15:01:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:04:41.775 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:04:41.776 Found net devices under 0000:4b:00.0: cvl_0_0 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ up == up ]] 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:04:41.776 Found net devices under 0000:4b:00.1: cvl_0_1 00:04:41.776 
15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # is_hw=yes 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:41.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:41.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:04:41.776 00:04:41.776 --- 10.0.0.2 ping statistics --- 00:04:41.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:41.776 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:41.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:41.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:04:41.776 00:04:41.776 --- 10.0.0.1 ping statistics --- 00:04:41.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:41.776 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # return 0 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # nvmfpid=3743770 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # waitforlisten 3743770 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3743770 ']' 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.776 15:01:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:41.776 [2024-10-01 15:01:50.795815] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:04:41.776 [2024-10-01 15:01:50.795865] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:41.776 [2024-10-01 15:01:50.877537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:41.776 [2024-10-01 15:01:50.948288] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:41.776 [2024-10-01 15:01:50.948332] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:41.776 [2024-10-01 15:01:50.948340] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:41.776 [2024-10-01 15:01:50.948346] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:41.776 [2024-10-01 15:01:50.948352] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:41.776 [2024-10-01 15:01:50.948463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.776 [2024-10-01 15:01:50.948625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.776 [2024-10-01 15:01:50.948626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.776 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.776 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:04:41.776 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:04:41.776 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.776 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:42.038 [2024-10-01 15:01:51.643369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:42.038 Malloc0 00:04:42.038 15:01:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:42.038 Delay0 00:04:42.038 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:42.039 [2024-10-01 15:01:51.729772] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.039 15:01:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:42.039 [2024-10-01 15:01:51.890138] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:44.585 Initializing NVMe Controllers 00:04:44.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:44.585 controller IO queue size 128 less than required 00:04:44.585 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:44.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:44.585 Initialization complete. Launching workers. 
00:04:44.585 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29098 00:04:44.585 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29159, failed to submit 62 00:04:44.585 success 29102, unsuccessful 57, failed 0 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # nvmfcleanup 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:44.585 rmmod nvme_tcp 00:04:44.585 rmmod nvme_fabrics 00:04:44.585 rmmod nvme_keyring 00:04:44.585 15:01:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:44.585 15:01:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@513 -- # '[' -n 3743770 ']' 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # killprocess 3743770 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3743770 ']' 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3743770 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3743770 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3743770' 00:04:44.585 killing process with pid 3743770 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3743770 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3743770 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-save 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- 
# grep -v SPDK_NVMF 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@787 -- # iptables-restore 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:44.585 15:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:46.501 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:46.501 00:04:46.501 real 0m13.295s 00:04:46.501 user 0m13.748s 00:04:46.501 sys 0m6.481s 00:04:46.501 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.501 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.501 ************************************ 00:04:46.501 END TEST nvmf_abort 00:04:46.501 ************************************ 00:04:46.501 15:01:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:46.501 15:01:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:46.501 15:01:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.501 15:01:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:46.763 ************************************ 00:04:46.763 START TEST nvmf_ns_hotplug_stress 00:04:46.763 ************************************ 00:04:46.763 15:01:56 
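The teardown trace above walks SPDK's kill-process guard: check that the PID variable is non-empty, probe liveness with `kill -0`, resolve the command name with `ps` and refuse to signal a `sudo` wrapper, then kill and wait. A minimal standalone sketch of that sequence (the function name is illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# killproc_safe: sketch of the guard sequence in the trace -- only kill
# when the PID is non-empty, the process still exists, and its command
# name is not "sudo".
killproc_safe() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the "[ -z ... ]" guard in the log
    kill -0 "$pid" 2>/dev/null || return 1    # "kill -0" liveness probe
    local name
    name=$(ps --no-headers -o comm= "$pid")   # resolve the command name
    [ "$name" = sudo ] && return 1            # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
}

sleep 30 &                                    # demo victim process
killproc_safe "$!" && echo "stopped"
```

The trailing `wait` mirrors the `wait $pid` step in the trace; it both reaps the child and ensures the teardown does not race ahead of the dying process.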
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:46.763 * Looking for test storage... 00:04:46.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.763 
15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:46.763 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.764 15:01:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:46.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.764 --rc genhtml_branch_coverage=1 00:04:46.764 --rc genhtml_function_coverage=1 00:04:46.764 --rc genhtml_legend=1 00:04:46.764 --rc geninfo_all_blocks=1 00:04:46.764 --rc geninfo_unexecuted_blocks=1 00:04:46.764 00:04:46.764 ' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:46.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.764 --rc genhtml_branch_coverage=1 00:04:46.764 --rc genhtml_function_coverage=1 00:04:46.764 --rc genhtml_legend=1 00:04:46.764 --rc geninfo_all_blocks=1 00:04:46.764 --rc geninfo_unexecuted_blocks=1 00:04:46.764 00:04:46.764 ' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:46.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.764 --rc genhtml_branch_coverage=1 00:04:46.764 --rc genhtml_function_coverage=1 00:04:46.764 --rc genhtml_legend=1 00:04:46.764 --rc geninfo_all_blocks=1 00:04:46.764 --rc geninfo_unexecuted_blocks=1 00:04:46.764 00:04:46.764 ' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:46.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.764 --rc genhtml_branch_coverage=1 00:04:46.764 --rc genhtml_function_coverage=1 00:04:46.764 --rc genhtml_legend=1 00:04:46.764 --rc geninfo_all_blocks=1 00:04:46.764 --rc geninfo_unexecuted_blocks=1 00:04:46.764 
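The `lt 1.15 2` / `cmp_versions` steps traced above split each version string on `.` and `-` into arrays (`read -ra ver1`) and compare component by component. A self-contained sketch of that comparison logic (the function name is illustrative; SPDK's version in `scripts/common.sh` also handles `>`, `>=`, and `<=` operators):

```shell
# ver_lt: return success when dotted version $1 is strictly less than $2,
# comparing numeric components left to right, missing components as 0.
ver_lt() {
    local -a v1 v2
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    local i len=${#v1[@]}
    (( ${#v2[@]} > len )) && len=${#v2[@]}
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0     # first differing component decides
        (( a > b )) && return 1
    done
    return 1                        # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Component-wise comparison is why `1.2` sorts below `1.10` here, which a plain string comparison would get wrong.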
00:04:46.764 ' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # prepare_net_devs 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@434 -- # local -g is_hw=no 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # remove_spdk_ns 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:46.764 15:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:04:54.939 15:02:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:54.939 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 
(0x8086 - 0x159b)' 00:04:54.940 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:04:54.940 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:04:54.940 15:02:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:04:54.940 Found net devices under 0000:4b:00.0: cvl_0_0 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ up == up ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:04:54.940 15:02:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:04:54.940 Found net devices under 0000:4b:00.1: cvl_0_1 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # is_hw=yes 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 
-- # NVMF_SECOND_TARGET_IP= 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:54.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:54.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:04:54.940 00:04:54.940 --- 10.0.0.2 ping statistics --- 00:04:54.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:54.940 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:54.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:54.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:04:54.940 00:04:54.940 --- 10.0.0.1 ping statistics --- 00:04:54.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:54.940 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # return 0 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:54.940 15:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:04:54.940 15:02:03 
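The `nvmf_tcp_init` sequence traced above moves the target-side port into a private network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420 with a comment-tagged iptables rule (tagged so teardown can filter it back out of `iptables-save` with `grep -v SPDK_NVMF`), and ping-checks both directions. A dry-run sketch of that plumbing, with interface and namespace names taken from the log; the `run` wrapper and `DRY_RUN` switch are illustrative additions, and the real rule's comment embeds the full rule text rather than the bare tag used here:

```shell
#!/usr/bin/env bash
# Dry-run by default: print each command instead of executing it.
# Set DRY_RUN=0 (and run as root) to actually apply the configuration.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_test_net() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"          # target port into the ns
    run ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    # Comment-tagged so teardown can strip it from iptables-save output.
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    run ping -c 1 10.0.0.2                         # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1     # target -> initiator
}

setup_test_net
```

Keeping the target in its own namespace is what lets one machine's two ports of the same NIC talk to each other over real TCP without the kernel short-circuiting the traffic through loopback.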
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:04:54.940 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:04:54.940 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # nvmfpid=3748625 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # waitforlisten 3748625 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3748625 ']' 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:54.941 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:54.941 [2024-10-01 15:02:04.069239] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:04:54.941 [2024-10-01 15:02:04.069304] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:54.941 [2024-10-01 15:02:04.156891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.941 [2024-10-01 15:02:04.248258] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:54.941 [2024-10-01 15:02:04.248310] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:54.941 [2024-10-01 15:02:04.248318] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:54.941 [2024-10-01 15:02:04.248325] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:54.941 [2024-10-01 15:02:04.248331] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:54.941 [2024-10-01 15:02:04.248458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.941 [2024-10-01 15:02:04.248617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.941 [2024-10-01 15:02:04.248618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.202 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.202 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:04:55.202 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:04:55.202 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.202 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:55.202 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:55.202 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:04:55.202 15:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:04:55.463 [2024-10-01 15:02:05.078786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.463 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:55.463 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:55.724 [2024-10-01 15:02:05.452061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:55.724 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:55.983 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:04:55.984 Malloc0 00:04:56.244 15:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:56.244 Delay0 00:04:56.244 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:56.504 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:04:56.764 NULL1 00:04:56.764 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:04:56.764 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:04:56.764 15:02:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3749188 00:04:56.764 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:04:56.764 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:57.025 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:57.285 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:04:57.285 15:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:04:57.285 true 00:04:57.285 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:04:57.285 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:57.545 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:57.805 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:04:57.805 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:04:57.805 true 00:04:58.092 15:02:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:04:58.092 15:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:59.110 Read completed with error (sct=0, sc=11) 00:04:59.110 15:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:59.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.110 15:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:04:59.110 15:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:04:59.370 true 00:04:59.370 15:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:04:59.370 15:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:00.309 15:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:00.309 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.568 15:02:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:00.568 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:00.568 true 00:05:00.568 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:00.568 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:00.828 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:01.087 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:01.087 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:01.087 true 00:05:01.087 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:01.087 15:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:02.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.470 15:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:02.470 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:05:02.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.470 15:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:02.470 15:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:02.730 true 00:05:02.730 15:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:02.730 15:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.671 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.671 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:03.671 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:03.931 true 00:05:03.931 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:03.931 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.931 
15:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.192 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:04.192 15:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:04.453 true 00:05:04.453 15:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:04.453 15:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.393 15:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.655 15:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:05.655 15:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:05.916 true 00:05:05.916 15:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:05.916 15:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.860 15:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.860 15:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:06.860 15:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:07.120 true 00:05:07.120 15:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:07.120 15:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.120 15:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.381 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:07.382 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:07.643 true 00:05:07.643 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:07.643 15:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.028 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.028 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:09.028 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:09.028 true 00:05:09.028 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:09.028 15:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.978 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.978 
15:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.978 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.240 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:10.240 15:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:10.240 true 00:05:10.240 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:10.240 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.501 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.762 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:10.762 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:10.762 true 00:05:10.762 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:10.762 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.023 15:02:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.283 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:11.283 15:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:11.283 true 00:05:11.544 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:11.544 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.544 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.804 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:11.804 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:12.064 true 00:05:12.064 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:12.064 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.064 15:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.325 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:12.325 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:12.587 true 00:05:12.587 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:12.587 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.587 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.848 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:12.848 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:13.109 true 00:05:13.109 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:13.109 15:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.495 15:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.495 15:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:14.495 15:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:14.495 true 00:05:14.495 15:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:14.495 15:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.441 15:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.701 15:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:15.701 15:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:15.701 true 00:05:15.701 
15:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:15.701 15:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.963 15:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.224 15:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:16.224 15:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:16.224 true 00:05:16.224 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:16.224 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.486 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.746 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:16.746 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:16.746 true 00:05:17.007 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188 00:05:17.007 15:02:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:17.007 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:17.267 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:17.267 15:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:17.528 true
00:05:17.528 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:17.528 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:17.528 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:17.789 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:17.789 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:18.050 true
00:05:18.050 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:18.050 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:18.050 15:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:18.311 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:18.311 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:18.572 true
00:05:18.572 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:18.572 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:18.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.832 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:18.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.832 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:18.832 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:19.092 true
00:05:19.092 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:19.092 15:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:20.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.033 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:20.033 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:20.033 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:20.292 true
00:05:20.292 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:20.292 15:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:20.552 15:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:20.552 15:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:20.552 15:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:20.811 true
00:05:20.811 15:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:20.811 15:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:21.138 15:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:21.138 15:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:21.138 15:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:21.398 true
00:05:21.398 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:21.398 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:21.657 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:21.657 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:05:21.657 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:05:21.917 true
00:05:21.917 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:21.917 15:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:23.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:23.302 15:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:23.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:23.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:23.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:23.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:23.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:23.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:23.302 15:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:05:23.302 15:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:05:23.302 true
00:05:23.302 15:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:23.302 15:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:24.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.242 15:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:24.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.512 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:05:24.512 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:05:24.512 true
00:05:24.797 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:24.797 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:24.797 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:25.061 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:05:25.061 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:05:25.061 true
00:05:25.061 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:25.062 15:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:25.321 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:25.582 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:05:25.582 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:05:25.582 true
00:05:25.842 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:25.842 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:25.842 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:26.102 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:05:26.102 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:05:26.362 true
00:05:26.362 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:26.362 15:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:27.301 Initializing NVMe Controllers
00:05:27.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:27.301 Controller IO queue size 128, less than required.
00:05:27.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:27.301 Controller IO queue size 128, less than required.
00:05:27.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:27.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:27.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:27.301 Initialization complete. Launching workers.
00:05:27.301 ========================================================
00:05:27.301 Latency(us)
00:05:27.301 Device Information : IOPS MiB/s Average min max
00:05:27.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1798.00 0.88 37950.58 2164.97 1051318.99
00:05:27.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15704.67 7.67 8123.39 1426.87 400303.73
00:05:27.301 ========================================================
00:05:27.301 Total : 17502.67 8.55 11187.45 1426.87 1051318.99
00:05:27.301
00:05:27.301 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:27.561 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:05:27.561 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:05:27.821 true
00:05:27.821 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3749188
00:05:27.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3749188) - No such process
00:05:27.821 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3749188
00:05:27.821 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:27.821 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:28.080 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:28.081 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:28.081 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:28.081 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.081 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:28.340 null0
00:05:28.340 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.340 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.340 15:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:28.340 null1
00:05:28.340 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.340 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.340 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:28.600 null2
00:05:28.600 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.600 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.600 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:05:28.859 null3
00:05:28.859 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.859 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.859 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:05:28.859 null4
00:05:28.859 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:28.859 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:28.859 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:05:29.119 null5
00:05:29.119 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:29.119 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:29.119 15:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:05:29.378 null6
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:05:29.378 null7
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3755715 3755716 3755718 3755720 3755722 3755724 3755726 3755728
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.378 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:29.638 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:29.638 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:29.638 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:29.638 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:29.638 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:29.638 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:29.638 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:29.638 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:29.898 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.165 15:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.165 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:30.425 15:02:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:30.425 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:30.685 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:30.945 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:31.206 15:02:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:31.206 15:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:31.206 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:31.206 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:31.206 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.206 15:02:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.206 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:31.467 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:31.727 15:02:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.727 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:31.987 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:31.987 
15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.247 15:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.247 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.507 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.508 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:32.768 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:33.028 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:33.289 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.289 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.289 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.289 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.289 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.289 15:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.289 15:02:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:33.289 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:33.289 rmmod nvme_tcp 00:05:33.289 rmmod nvme_fabrics 00:05:33.550 rmmod nvme_keyring 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@513 -- # '[' -n 3748625 ']' 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # killprocess 3748625 00:05:33.550 15:02:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3748625 ']' 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3748625 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3748625 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3748625' 00:05:33.550 killing process with pid 3748625 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3748625 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3748625 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-restore 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # iptables-save 
00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:33.550 15:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:36.092 00:05:36.092 real 0m49.075s 00:05:36.092 user 3m14.738s 00:05:36.092 sys 0m16.020s 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:36.092 ************************************ 00:05:36.092 END TEST nvmf_ns_hotplug_stress 00:05:36.092 ************************************ 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:36.092 ************************************ 00:05:36.092 START TEST 
nvmf_delete_subsystem 00:05:36.092 ************************************ 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:36.092 * Looking for test storage... 00:05:36.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.092 15:02:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.092 --rc genhtml_branch_coverage=1 00:05:36.092 --rc genhtml_function_coverage=1 00:05:36.092 --rc genhtml_legend=1 00:05:36.092 --rc geninfo_all_blocks=1 00:05:36.092 --rc geninfo_unexecuted_blocks=1 00:05:36.092 00:05:36.092 ' 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.092 --rc genhtml_branch_coverage=1 00:05:36.092 --rc genhtml_function_coverage=1 00:05:36.092 --rc genhtml_legend=1 00:05:36.092 --rc geninfo_all_blocks=1 00:05:36.092 --rc geninfo_unexecuted_blocks=1 00:05:36.092 00:05:36.092 ' 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.092 --rc genhtml_branch_coverage=1 00:05:36.092 --rc genhtml_function_coverage=1 00:05:36.092 --rc genhtml_legend=1 00:05:36.092 --rc geninfo_all_blocks=1 00:05:36.092 --rc geninfo_unexecuted_blocks=1 00:05:36.092 00:05:36.092 ' 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.092 --rc genhtml_branch_coverage=1 00:05:36.092 --rc genhtml_function_coverage=1 00:05:36.092 --rc genhtml_legend=1 00:05:36.092 --rc geninfo_all_blocks=1 
00:05:36.092 --rc geninfo_unexecuted_blocks=1 00:05:36.092 00:05:36.092 ' 00:05:36.092 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:36.093 15:02:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:42.683 15:02:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:42.683 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:42.684 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound 
]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:42.684 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ 
tcp == tcp ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:42.684 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:42.684 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:42.684 15:02:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # is_hw=yes 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:42.684 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:42.944 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:42.944 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:42.945 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:42.945 15:02:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:43.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:43.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:05:43.205 00:05:43.205 --- 10.0.0.2 ping statistics --- 00:05:43.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:43.205 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:43.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:43.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:05:43.205 00:05:43.205 --- 10.0.0.1 ping statistics --- 00:05:43.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:43.205 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # return 0 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:05:43.205 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:43.466 15:02:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # nvmfpid=3761114 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # waitforlisten 3761114 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3761114 ']' 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.466 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:43.466 [2024-10-01 15:02:53.167035] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:05:43.466 [2024-10-01 15:02:53.167090] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:43.466 [2024-10-01 15:02:53.234193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.466 [2024-10-01 15:02:53.298774] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:43.466 [2024-10-01 15:02:53.298813] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:43.466 [2024-10-01 15:02:53.298821] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:43.466 [2024-10-01 15:02:53.298828] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:43.466 [2024-10-01 15:02:53.298834] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
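(Aside on the startup notices above: the target is launched with `-m 0x3` and the EAL reports "Total cores available: 2" because the core mask is a bitmap of reactor cores. A minimal, runnable sketch of that relationship; `count_cores` is a hypothetical helper, not part of SPDK.)

```shell
# count_cores: number of set bits in an SPDK core mask (e.g. "-m 0x3").
# Each set bit selects one reactor core, so 0x3 -> cores 0 and 1.
count_cores() {
  local v=$(( $1 )) count=0
  while [ "$v" -gt 0 ]; do
    count=$(( count + (v & 1) ))   # add the lowest bit
    v=$(( v >> 1 ))                # shift to the next bit
  done
  echo "$count"
}

count_cores 0x3    # prints 2, matching "Total cores available: 2"
```
This also explains the two "Reactor started on core 0/1" notices that follow: one reactor per set bit in the mask.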
00:05:43.467 [2024-10-01 15:02:53.298971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.467 [2024-10-01 15:02:53.298971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.409 15:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.409 [2024-10-01 15:02:53.999612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.409 [2024-10-01 15:02:54.015802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.409 NULL1 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.409 Delay0 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.409 15:02:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3761236 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:44.409 15:02:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:44.409 [2024-10-01 15:02:54.100590] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
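(For readability, the `rpc_cmd` xtrace above boils down to the following RPC sequence. This is a dry-run sketch: the `rpc` stand-in just echoes each call so the snippet runs anywhere, whereas the real test invokes `scripts/rpc.py` against the target started inside the network namespace. All RPC names and arguments are taken verbatim from the log.)

```shell
# Dry-run stand-in; the real test would call scripts/rpc.py instead.
rpc() { echo "rpc.py $*"; }

setup_delete_subsystem_target() {
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev, 512 B blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

setup_delete_subsystem_target
```
The delay bdev (`Delay0`) keeps I/O in flight long enough that the subsequent `nvmf_delete_subsystem` races with active `spdk_nvme_perf` traffic, which is what this test exercises.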
00:05:46.321 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:46.321 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.321 15:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:46.581 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error 
(sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 starting I/O failed: -6 00:05:46.582 [2024-10-01 15:02:56.227108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074750 is same with the state(6) to be set 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Write completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with error (sct=0, sc=8) 00:05:46.582 Read completed with 
error (sct=0, sc=8)
00:05:46.582 Read completed with error (sct=0, sc=8)
00:05:46.582 Write completed with error (sct=0, sc=8)
00:05:46.582 starting I/O failed: -6
[several hundred repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries between 00:05:46.582 and 00:05:47.524 omitted; unique *ERROR* entries retained below]
00:05:46.583 [2024-10-01 15:02:56.231966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef6400d450 is same with the state(6) to be set
00:05:47.524 [2024-10-01 15:02:57.199264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1075a70 is same with the state(6) to be set
00:05:47.524 [2024-10-01 15:02:57.230497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074930 is same with the state(6) to be set
00:05:47.524 [2024-10-01 15:02:57.230868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1074570 is same with the state(6) to be set
00:05:47.524 [2024-10-01 15:02:57.233812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef6400cfe0 is same with the state(6) to be set
00:05:47.524 [2024-10-01 15:02:57.234493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fef6400d780 is same with the state(6) to be set
00:05:47.524 Initializing NVMe Controllers
00:05:47.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:47.524 Controller IO queue size 128, less than required.
00:05:47.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:47.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:47.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:47.524 Initialization complete. Launching workers.
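Editorial aside on the completion errors above: every failed I/O in this test carries the same status pair, sct=0 (Generic Command Status) with sc=8, which per the NVMe base specification is "Command Aborted due to SQ Deletion" — exactly what queued I/O reports when delete_subsystem tears the qpairs down underneath spdk_nvme_perf. The mapping below is a sketch copied from the spec's generic-status table, not from SPDK itself; verify against include/spdk/nvme_spec.h before relying on it.

```shell
# Hypothetical helper (not part of SPDK): decode the (sct, sc) pair that
# spdk_nvme_perf prints. Only generic status (sct=0) is covered, and only
# the values relevant to this log; other codes fall through generically.
decode_nvme_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 0 ]; then
        echo "non-generic status type $sct, sc=$sc"
        return
    fi
    case $sc in
        0) echo "Successful Completion" ;;
        7) echo "Command Abort Requested" ;;
        8) echo "Aborted - SQ Deletion" ;;
        *) echo "generic status sc=$sc" ;;
    esac
}

decode_nvme_status 0 8
```

So the wall of (sct=0, sc=8) completions is the expected signature of this test, not a transport fault; the nvme_tcp *ERROR* lines about recv state accompany the same teardown.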
00:05:47.524 ========================================================
00:05:47.524 Latency(us)
00:05:47.524 Device Information : IOPS MiB/s Average min max
00:05:47.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.40 0.08 903570.88 222.96 1005924.89
00:05:47.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.43 0.08 943986.33 277.62 2001235.06
00:05:47.524 ========================================================
00:05:47.524 Total : 321.83 0.16 923215.54 222.96 2001235.06
00:05:47.524
00:05:47.524 [2024-10-01 15:02:57.235031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1075a70 (9): Bad file descriptor
00:05:47.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:05:47.524 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:47.524 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:05:47.524 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3761236
00:05:47.524 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3761236
00:05:48.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3761236) - No such process
00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3761236
00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:05:48.095 15:02:57
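The perf latency table above is internally consistent: the Total row's average (923215.54 us) is the IOPS-weighted mean of the two per-core averages, and Total IOPS is their sum (165.40 + 156.43 = 321.83). A quick cross-check with awk (figures copied from the table; the sub-microsecond drift from 923215.54 is rounding in the displayed IOPS):

```shell
# Recompute spdk_nvme_perf's Total average latency as the IOPS-weighted
# mean of the per-core averages from the table above.
awk 'BEGIN {
    wavg = (165.40 * 903570.88 + 156.43 * 943986.33) / (165.40 + 156.43)
    printf "weighted average latency: %.2f us\n", wavg
}'
```

The ~1000 ms averages reflect I/O that sat queued until the subsystem deletion aborted it, not normal TCP transport latency.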
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3761236 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3761236 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:48.095 
15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.095 [2024-10-01 15:02:57.764457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3761928 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3761928 00:05:48.095 15:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:48.095 [2024-10-01 15:02:57.845724] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:48.667 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:48.667 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3761928 00:05:48.667 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:49.238 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:49.238 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3761928 00:05:49.238 15:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:49.497 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:49.497 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3761928 00:05:49.497 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:50.068 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:50.068 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3761928 00:05:50.068 15:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:50.638 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:50.639 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3761928 00:05:50.639 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:51.209 15:03:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:05:51.209 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3761928
00:05:51.209 15:03:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:05:51.209 Initializing NVMe Controllers
00:05:51.209 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:51.209 Controller IO queue size 128, less than required.
00:05:51.209 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:51.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:51.209 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:51.209 Initialization complete. Launching workers.
00:05:51.209 ========================================================
00:05:51.209 Latency(us)
00:05:51.209 Device Information : IOPS MiB/s Average min max
00:05:51.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002184.66 1000136.24 1041729.25
00:05:51.209 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002942.58 1000269.39 1009621.31
00:05:51.209 ========================================================
00:05:51.209 Total : 256.00 0.12 1002563.62 1000136.24 1041729.25
00:05:51.209
00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3761928
00:05:51.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3761928) - No such process
00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 3761928 00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:51.469 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:51.469 rmmod nvme_tcp 00:05:51.730 rmmod nvme_fabrics 00:05:51.730 rmmod nvme_keyring 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@513 -- # '[' -n 3761114 ']' 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # killprocess 3761114 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3761114 ']' 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3761114 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:05:51.730 15:03:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3761114 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3761114' 00:05:51.730 killing process with pid 3761114 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3761114 00:05:51.730 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3761114 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-save 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@787 -- # iptables-restore 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns
00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:51.991 15:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:53.901 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:05:53.901
00:05:53.901 real 0m18.172s
00:05:53.901 user 0m30.561s
00:05:53.901 sys 0m6.446s
00:05:53.901 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:53.901 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:05:53.901 ************************************
00:05:53.901 END TEST nvmf_delete_subsystem
00:05:53.901 ************************************
00:05:53.902 15:03:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:05:53.902 15:03:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:05:53.902 15:03:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:53.902 15:03:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:54.162 ************************************
00:05:54.162 START TEST nvmf_host_management
00:05:54.162 ************************************
00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:05:54.162 * Looking for test storage...
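The delete_subsystem traces above repeat one pattern: probe the perf process with `kill -0`, sleep 0.5 s, and give up once a counter passes a bound (`(( delay++ > 20 ))` at line 60, `> 30` at line 38), with `kill: (pid) - No such process` marking the exit path. A standalone sketch of that poll loop (the function name is mine, not SPDK's):

```shell
# Sketch of the retry loop from delete_subsystem.sh: wait until a PID
# exits, probing with `kill -0` every 0.5s, bounded by max_tries polls.
wait_pid_gone() {
    local pid=$1 max_tries=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > max_tries )); then
            return 1    # still alive after the bound: give up
        fi
        sleep 0.5
    done
    return 0    # process is gone (the "No such process" branch above)
}
```

`kill -0` sends no signal; it only tests whether the PID is still signalable, which is why the loop is cheap enough to run at 0.5 s granularity.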
00:05:54.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:54.162 15:03:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.162 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.163 15:03:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.163 --rc genhtml_branch_coverage=1 00:05:54.163 --rc genhtml_function_coverage=1 00:05:54.163 --rc genhtml_legend=1 00:05:54.163 --rc geninfo_all_blocks=1 00:05:54.163 --rc geninfo_unexecuted_blocks=1 00:05:54.163 00:05:54.163 ' 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.163 --rc genhtml_branch_coverage=1 00:05:54.163 --rc genhtml_function_coverage=1 00:05:54.163 --rc genhtml_legend=1 00:05:54.163 --rc geninfo_all_blocks=1 00:05:54.163 --rc geninfo_unexecuted_blocks=1 00:05:54.163 00:05:54.163 ' 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.163 --rc genhtml_branch_coverage=1 00:05:54.163 --rc genhtml_function_coverage=1 00:05:54.163 --rc genhtml_legend=1 00:05:54.163 --rc geninfo_all_blocks=1 00:05:54.163 --rc geninfo_unexecuted_blocks=1 00:05:54.163 00:05:54.163 ' 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.163 --rc genhtml_branch_coverage=1 00:05:54.163 --rc genhtml_function_coverage=1 00:05:54.163 --rc genhtml_legend=1 00:05:54.163 --rc geninfo_all_blocks=1 00:05:54.163 --rc geninfo_unexecuted_blocks=1 00:05:54.163 00:05:54.163 ' 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:54.163 15:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:54.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:54.163 15:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:02.307 15:03:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:02.307 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:02.308 15:03:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:02.308 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:02.308 15:03:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:02.308 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:02.308 15:03:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:02.308 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.308 15:03:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:02.308 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # is_hw=yes 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:02.308 
15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:06:02.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:02.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:06:02.308 00:06:02.308 --- 10.0.0.2 ping statistics --- 00:06:02.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.308 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:06:02.308 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:02.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:02.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:06:02.308 00:06:02.308 --- 10.0.0.1 ping statistics --- 00:06:02.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.309 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # return 0 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # 
nvmf_host_management 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=3767514 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 3767514 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3767514 ']' 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 [2024-10-01 15:03:11.423400] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:06:02.309 [2024-10-01 15:03:11.423460] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:02.309 [2024-10-01 15:03:11.482615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.309 [2024-10-01 15:03:11.539652] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:02.309 [2024-10-01 15:03:11.539685] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:02.309 [2024-10-01 15:03:11.539691] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:02.309 [2024-10-01 15:03:11.539696] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:02.309 [2024-10-01 15:03:11.539700] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:02.309 [2024-10-01 15:03:11.541008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.309 [2024-10-01 15:03:11.541157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.309 [2024-10-01 15:03:11.541406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.309 [2024-10-01 15:03:11.541407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 [2024-10-01 15:03:11.688216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:02.309 15:03:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 Malloc0 00:06:02.309 [2024-10-01 15:03:11.751459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3767677 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3767677 /var/tmp/bdevperf.sock 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3767677 ']' 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:02.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:06:02.309 { 00:06:02.309 "params": { 00:06:02.309 "name": "Nvme$subsystem", 00:06:02.309 "trtype": "$TEST_TRANSPORT", 00:06:02.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:02.309 "adrfam": "ipv4", 00:06:02.309 "trsvcid": "$NVMF_PORT", 00:06:02.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:02.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:02.309 "hdgst": ${hdgst:-false}, 
00:06:02.309 "ddgst": ${ddgst:-false} 00:06:02.309 }, 00:06:02.309 "method": "bdev_nvme_attach_controller" 00:06:02.309 } 00:06:02.309 EOF 00:06:02.309 )") 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:06:02.309 15:03:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:06:02.309 "params": { 00:06:02.309 "name": "Nvme0", 00:06:02.309 "trtype": "tcp", 00:06:02.310 "traddr": "10.0.0.2", 00:06:02.310 "adrfam": "ipv4", 00:06:02.310 "trsvcid": "4420", 00:06:02.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:02.310 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:02.310 "hdgst": false, 00:06:02.310 "ddgst": false 00:06:02.310 }, 00:06:02.310 "method": "bdev_nvme_attach_controller" 00:06:02.310 }' 00:06:02.310 [2024-10-01 15:03:11.856965] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:06:02.310 [2024-10-01 15:03:11.857026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3767677 ] 00:06:02.310 [2024-10-01 15:03:11.917562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.310 [2024-10-01 15:03:11.983630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.310 Running I/O for 10 seconds... 
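The trace above (nvmf/common.sh@556-582) shows how the bdevperf JSON config is assembled: each subsystem appends a here-doc fragment to a `config` array, and the fragments are then joined with `IFS=,` and printed (the real helper also pipes the result through `jq .`). A minimal standalone sketch of that array-of-heredocs pattern, with illustrative names and values rather than the exact `gen_nvmf_target_json` helper:

```shell
#!/usr/bin/env bash
# Sketch of the config-fragment pattern from the trace: collect one JSON
# fragment per subsystem in an array, then join the fragments with IFS=,
# the way the helper does before emitting the final config.
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "$@"; do
        # One here-doc per subsystem keeps the fragment readable; the shell
        # expands $subsystem inside it, as in the traced helper.
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp", "trsvcid": "4420" },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    # IFS=, makes "${config[*]}" join the fragments with commas.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 0
```

With more than one argument (e.g. `gen_target_json 0 1`), the fragments come out comma-separated, ready to be wrapped into a JSON array or fed to `jq` for validation.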
00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=856 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 856 -ge 100 ']' 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.882 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.882 [2024-10-01 15:03:12.714491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16132d0 is same with the state(6) to be set 00:06:02.882 [2024-10-01 15:03:12.714544 - 15:03:12.714773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: [identical "recv state of tqpair=0x16132d0" message repeated 34 more times; duplicates elided] 00:06:02.883 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.883 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:02.883 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.883 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:02.883 [2024-10-01 15:03:12.720946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000
cdw11:00000000 00:06:02.883 [2024-10-01 15:03:12.720981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.883 [2024-10-01 15:03:12.720992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.883 [2024-10-01 15:03:12.721004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.883 [2024-10-01 15:03:12.721013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.883 [2024-10-01 15:03:12.721021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.883 [2024-10-01 15:03:12.721028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:02.883 [2024-10-01 15:03:12.721036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.883 [2024-10-01 15:03:12.721044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16490d0 is same with the state(6) to be set 00:06:02.883 [2024-10-01 15:03:12.730245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.883 [2024-10-01 15:03:12.730267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.883 [2024-10-01 15:03:12.730281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.883 [2024-10-01 15:03:12.730289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.883 [2024-10-01 15:03:12.730299 - 15:03:12.731323] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [analogous WRITE (sqid:1, lba 122880-130560, len:128) and ABORTED - SQ DELETION notice pairs for cid 0-59 and 61; duplicates elided] 00:06:02.885 [2024-10-01 15:03:12.731332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130688
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:02.885 [2024-10-01 15:03:12.731339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:02.885 [2024-10-01 15:03:12.731392] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1659280 was disconnected and freed. reset controller. 00:06:02.885 [2024-10-01 15:03:12.731421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16490d0 (9): Bad file descriptor 00:06:02.885 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.885 15:03:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:02.885 [2024-10-01 15:03:12.732590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:06:02.885 task offset: 122624 on job bdev=Nvme0n1 fails
00:06:02.885
00:06:02.885                                                 Latency(us)
00:06:02.885 Device Information                              : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average   min      max
00:06:02.885 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:02.885 Job: Nvme0n1 ended in about 0.56 seconds with error
00:06:02.885 Verification LBA range: start 0x0 length 0x400
00:06:02.885 Nvme0n1                                         : 0.56        1697.68   106.10  113.41  0.00  34457.58  1515.52  31457.28
00:06:02.885 ===================================================================================================================
00:06:02.885 Total                                           :             1697.68   106.10  113.41  0.00  34457.58  1515.52  31457.28
00:06:02.885 [2024-10-01 15:03:12.734567] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.145 [2024-10-01 15:03:12.755845] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
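The `waitforio` gate traced earlier polls `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1`, extracts the counter with `jq -r '.bdevs[0].num_read_ops'`, and breaks once at least 100 reads have completed (856 in this run). A hypothetical sketch of that loop using a canned counter instead of a live bdevperf socket:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio loop from host_management.sh: poll an I/O
# counter up to 10 times, succeed once >= 100 reads are observed.
# read_io_count stands in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
#     | jq -r '.bdevs[0].num_read_ops'
read_io_count() {
    echo 856   # canned value matching the trace; a real run queries the RPC socket
}

ret=1
for ((i = 10; i != 0; i--)); do
    count=$(read_io_count)
    if [ "$count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 1
done
echo "ret=$ret count=$count"
```

With the canned value the gate passes on the first iteration, which is exactly what the trace shows: `read_io_count=856`, `'[' 856 -ge 100 ']'`, `ret=0`, `break`.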
00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3767677 00:06:04.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3767677) - No such process 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:06:04.084 { 00:06:04.084 "params": { 00:06:04.084 "name": "Nvme$subsystem", 00:06:04.084 "trtype": "$TEST_TRANSPORT", 00:06:04.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:04.084 "adrfam": "ipv4", 00:06:04.084 "trsvcid": "$NVMF_PORT", 00:06:04.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:04.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:04.084 "hdgst": ${hdgst:-false}, 00:06:04.084 "ddgst": ${ddgst:-false} 00:06:04.084 }, 00:06:04.084 "method": "bdev_nvme_attach_controller" 00:06:04.084 } 00:06:04.084 EOF 00:06:04.084 )") 00:06:04.084 
15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:06:04.084 15:03:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:06:04.084 "params": { 00:06:04.084 "name": "Nvme0", 00:06:04.084 "trtype": "tcp", 00:06:04.084 "traddr": "10.0.0.2", 00:06:04.084 "adrfam": "ipv4", 00:06:04.084 "trsvcid": "4420", 00:06:04.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:04.084 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:04.084 "hdgst": false, 00:06:04.084 "ddgst": false 00:06:04.084 }, 00:06:04.084 "method": "bdev_nvme_attach_controller" 00:06:04.084 }' 00:06:04.084 [2024-10-01 15:03:13.801060] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:06:04.084 [2024-10-01 15:03:13.801131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768151 ] 00:06:04.084 [2024-10-01 15:03:13.862407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.084 [2024-10-01 15:03:13.926887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.344 Running I/O for 1 seconds... 
00:06:05.558 1918.00 IOPS, 119.88 MiB/s 00:06:05.558 Latency(us) 00:06:05.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:05.559 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:05.559 Verification LBA range: start 0x0 length 0x400 00:06:05.559 Nvme0n1 : 1.05 1882.21 117.64 0.00 0.00 32021.34 1843.20 42379.95 00:06:05.559 =================================================================================================================== 00:06:05.559 Total : 1882.21 117.64 0.00 0.00 32021.34 1843.20 42379.95 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:06:05.559 rmmod nvme_tcp 00:06:05.559 rmmod nvme_fabrics 00:06:05.559 rmmod nvme_keyring 00:06:05.559 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 3767514 ']' 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 3767514 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3767514 ']' 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3767514 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3767514 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3767514' 00:06:05.818 killing process with pid 3767514 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3767514 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3767514 00:06:05.818 
[2024-10-01 15:03:15.599202] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.818 15:03:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:08.467 00:06:08.467 real 0m13.934s 00:06:08.467 user 0m20.847s 00:06:08.467 sys 0m6.579s 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.467 ************************************ 00:06:08.467 END TEST nvmf_host_management 00:06:08.467 ************************************ 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.467 ************************************ 00:06:08.467 START TEST nvmf_lvol 00:06:08.467 ************************************ 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:08.467 * Looking for test storage... 
00:06:08.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.467 15:03:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.467 --rc genhtml_branch_coverage=1 00:06:08.467 --rc genhtml_function_coverage=1 00:06:08.467 --rc genhtml_legend=1 00:06:08.467 --rc geninfo_all_blocks=1 00:06:08.467 --rc geninfo_unexecuted_blocks=1 
00:06:08.467 00:06:08.467 ' 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.467 --rc genhtml_branch_coverage=1 00:06:08.467 --rc genhtml_function_coverage=1 00:06:08.467 --rc genhtml_legend=1 00:06:08.467 --rc geninfo_all_blocks=1 00:06:08.467 --rc geninfo_unexecuted_blocks=1 00:06:08.467 00:06:08.467 ' 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.467 --rc genhtml_branch_coverage=1 00:06:08.467 --rc genhtml_function_coverage=1 00:06:08.467 --rc genhtml_legend=1 00:06:08.467 --rc geninfo_all_blocks=1 00:06:08.467 --rc geninfo_unexecuted_blocks=1 00:06:08.467 00:06:08.467 ' 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.467 --rc genhtml_branch_coverage=1 00:06:08.467 --rc genhtml_function_coverage=1 00:06:08.467 --rc genhtml_legend=1 00:06:08.467 --rc geninfo_all_blocks=1 00:06:08.467 --rc geninfo_unexecuted_blocks=1 00:06:08.467 00:06:08.467 ' 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.467 15:03:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.467 15:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.467 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:08.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:08.468 15:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:15.077 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:15.078 15:03:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:15.078 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:15.078 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:15.078 15:03:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:15.078 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:15.078 15:03:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:15.078 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # is_hw=yes 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.078 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:15.338 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:15.338 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:15.338 15:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.338 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.338 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.338 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:15.338 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.338 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.338 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.338 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:15.338 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:15.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:15.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:06:15.339 00:06:15.339 --- 10.0.0.2 ping statistics --- 00:06:15.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.339 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:15.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:06:15.339 00:06:15.339 --- 10.0.0.1 ping statistics --- 00:06:15.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.339 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # return 0 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:15.339 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=3772596 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 3772596 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3772596 ']' 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:15.599 15:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:15.599 [2024-10-01 15:03:25.308069] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:06:15.599 [2024-10-01 15:03:25.308130] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.599 [2024-10-01 15:03:25.377543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.599 [2024-10-01 15:03:25.449383] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.599 [2024-10-01 15:03:25.449424] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.599 [2024-10-01 15:03:25.449432] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.599 [2024-10-01 15:03:25.449439] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.599 [2024-10-01 15:03:25.449449] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:15.599 [2024-10-01 15:03:25.449586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.599 [2024-10-01 15:03:25.449709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.599 [2024-10-01 15:03:25.449712] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.541 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.541 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:16.541 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:16.541 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:16.541 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:16.541 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.541 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:16.541 [2024-10-01 15:03:26.279175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.541 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:16.803 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:16.803 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:17.064 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:17.064 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:17.064 15:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:17.325 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f6cbaacb-9aca-42a7-95f3-684d13d654f5 00:06:17.325 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f6cbaacb-9aca-42a7-95f3-684d13d654f5 lvol 20 00:06:17.586 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f5cb9057-0eb6-431a-9703-8a8efde70b89 00:06:17.586 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:17.847 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f5cb9057-0eb6-431a-9703-8a8efde70b89 00:06:17.847 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:18.108 [2024-10-01 15:03:27.779570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.108 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:18.368 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3773295 00:06:18.368 15:03:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:18.368 15:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:19.306 15:03:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f5cb9057-0eb6-431a-9703-8a8efde70b89 MY_SNAPSHOT 00:06:19.567 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f64e39d0-fb71-4d6e-8679-f92394d22016 00:06:19.567 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f5cb9057-0eb6-431a-9703-8a8efde70b89 30 00:06:19.827 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f64e39d0-fb71-4d6e-8679-f92394d22016 MY_CLONE 00:06:19.827 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c069ba57-2c1a-4d78-961f-b21c86c330ad 00:06:19.827 15:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c069ba57-2c1a-4d78-961f-b21c86c330ad 00:06:20.398 15:03:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3773295 00:06:28.539 Initializing NVMe Controllers 00:06:28.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:28.539 Controller IO queue size 128, less than required. 00:06:28.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:28.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:28.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:28.539 Initialization complete. Launching workers. 00:06:28.539 ======================================================== 00:06:28.539 Latency(us) 00:06:28.539 Device Information : IOPS MiB/s Average min max 00:06:28.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12200.70 47.66 10498.63 2106.01 54821.59 00:06:28.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16424.90 64.16 7796.04 3946.06 69436.97 00:06:28.539 ======================================================== 00:06:28.539 Total : 28625.60 111.82 8947.93 2106.01 69436.97 00:06:28.539 00:06:28.539 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:28.800 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f5cb9057-0eb6-431a-9703-8a8efde70b89 00:06:28.800 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f6cbaacb-9aca-42a7-95f3-684d13d654f5 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:29.061 rmmod nvme_tcp 00:06:29.061 rmmod nvme_fabrics 00:06:29.061 rmmod nvme_keyring 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 3772596 ']' 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 3772596 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3772596 ']' 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3772596 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3772596 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3772596' 00:06:29.061 killing process with pid 3772596 00:06:29.061 15:03:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3772596 00:06:29.061 15:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3772596 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.321 15:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.870 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:31.870 00:06:31.870 real 0m23.374s 00:06:31.870 user 1m3.636s 00:06:31.871 sys 0m8.395s 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:31.871 ************************************ 00:06:31.871 END TEST 
nvmf_lvol 00:06:31.871 ************************************ 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.871 ************************************ 00:06:31.871 START TEST nvmf_lvs_grow 00:06:31.871 ************************************ 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:31.871 * Looking for test storage... 00:06:31.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.871 15:03:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.871 --rc genhtml_branch_coverage=1 00:06:31.871 --rc genhtml_function_coverage=1 00:06:31.871 --rc genhtml_legend=1 00:06:31.871 --rc geninfo_all_blocks=1 00:06:31.871 --rc geninfo_unexecuted_blocks=1 00:06:31.871 00:06:31.871 ' 
00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.871 --rc genhtml_branch_coverage=1 00:06:31.871 --rc genhtml_function_coverage=1 00:06:31.871 --rc genhtml_legend=1 00:06:31.871 --rc geninfo_all_blocks=1 00:06:31.871 --rc geninfo_unexecuted_blocks=1 00:06:31.871 00:06:31.871 ' 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.871 --rc genhtml_branch_coverage=1 00:06:31.871 --rc genhtml_function_coverage=1 00:06:31.871 --rc genhtml_legend=1 00:06:31.871 --rc geninfo_all_blocks=1 00:06:31.871 --rc geninfo_unexecuted_blocks=1 00:06:31.871 00:06:31.871 ' 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.871 --rc genhtml_branch_coverage=1 00:06:31.871 --rc genhtml_function_coverage=1 00:06:31.871 --rc genhtml_legend=1 00:06:31.871 --rc geninfo_all_blocks=1 00:06:31.871 --rc geninfo_unexecuted_blocks=1 00:06:31.871 00:06:31.871 ' 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.871 15:03:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.871 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.872 
15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.872 15:03:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.872 
15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.872 15:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # 
pci_devs=("${e810[@]}") 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:40.004 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:40.004 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:40.005 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:06:40.005 15:03:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:40.005 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ up == up ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # (( 1 == 0 )) 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:40.005 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # is_hw=yes 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:40.005 15:03:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 
10.0.0.2 00:06:40.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:40.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:06:40.005 00:06:40.005 --- 10.0.0.2 ping statistics --- 00:06:40.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.005 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:40.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:40.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:06:40.005 00:06:40.005 --- 10.0.0.1 ping statistics --- 00:06:40.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.005 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # return 0 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=3779663 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 3779663 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3779663 ']' 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.005 15:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.005 [2024-10-01 15:03:48.933781] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:06:40.005 [2024-10-01 15:03:48.933847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.005 [2024-10-01 15:03:49.003871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.005 [2024-10-01 15:03:49.076756] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.005 [2024-10-01 15:03:49.076794] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.005 [2024-10-01 15:03:49.076802] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.005 [2024-10-01 15:03:49.076809] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.005 [2024-10-01 15:03:49.076815] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:40.005 [2024-10-01 15:03:49.076839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.005 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.006 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:06:40.006 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:06:40.006 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.006 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.006 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.006 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:40.266 [2024-10-01 15:03:49.917872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:40.266 ************************************ 00:06:40.266 START TEST lvs_grow_clean 00:06:40.266 ************************************ 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:40.266 15:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:40.527 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:40.527 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:40.527 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3205bbe1-da96-497f-9cee-2c27a0588152 00:06:40.527 15:03:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:40.527 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:40.786 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:40.786 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:40.787 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3205bbe1-da96-497f-9cee-2c27a0588152 lvol 150 00:06:41.047 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4 00:06:41.047 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:41.047 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:41.047 [2024-10-01 15:03:50.865573] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:41.047 [2024-10-01 15:03:50.865622] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:41.047 true 00:06:41.047 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:41.047 15:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:41.309 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:41.309 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:41.570 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4 00:06:41.570 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:41.831 [2024-10-01 15:03:51.535619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.831 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3780375 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3780375 /var/tmp/bdevperf.sock 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3780375 ']' 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:42.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.092 15:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:42.092 [2024-10-01 15:03:51.759889] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:06:42.092 [2024-10-01 15:03:51.759943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780375 ] 00:06:42.092 [2024-10-01 15:03:51.838120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.092 [2024-10-01 15:03:51.902234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.034 15:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.034 15:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:06:43.034 15:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:43.295 Nvme0n1 00:06:43.295 15:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:43.295 [ 00:06:43.295 { 00:06:43.295 "name": "Nvme0n1", 00:06:43.295 "aliases": [ 00:06:43.295 "7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4" 00:06:43.295 ], 00:06:43.295 "product_name": "NVMe disk", 00:06:43.295 "block_size": 4096, 00:06:43.295 "num_blocks": 38912, 00:06:43.295 "uuid": "7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4", 00:06:43.295 "numa_id": 0, 00:06:43.295 "assigned_rate_limits": { 00:06:43.295 "rw_ios_per_sec": 0, 00:06:43.295 "rw_mbytes_per_sec": 0, 00:06:43.295 "r_mbytes_per_sec": 0, 00:06:43.295 "w_mbytes_per_sec": 0 00:06:43.295 }, 00:06:43.295 "claimed": false, 00:06:43.295 "zoned": false, 00:06:43.295 "supported_io_types": { 00:06:43.295 "read": true, 
00:06:43.295 "write": true, 00:06:43.295 "unmap": true, 00:06:43.295 "flush": true, 00:06:43.295 "reset": true, 00:06:43.295 "nvme_admin": true, 00:06:43.295 "nvme_io": true, 00:06:43.295 "nvme_io_md": false, 00:06:43.295 "write_zeroes": true, 00:06:43.295 "zcopy": false, 00:06:43.295 "get_zone_info": false, 00:06:43.295 "zone_management": false, 00:06:43.295 "zone_append": false, 00:06:43.295 "compare": true, 00:06:43.295 "compare_and_write": true, 00:06:43.295 "abort": true, 00:06:43.295 "seek_hole": false, 00:06:43.295 "seek_data": false, 00:06:43.295 "copy": true, 00:06:43.295 "nvme_iov_md": false 00:06:43.295 }, 00:06:43.295 "memory_domains": [ 00:06:43.295 { 00:06:43.295 "dma_device_id": "system", 00:06:43.295 "dma_device_type": 1 00:06:43.295 } 00:06:43.295 ], 00:06:43.295 "driver_specific": { 00:06:43.295 "nvme": [ 00:06:43.295 { 00:06:43.295 "trid": { 00:06:43.295 "trtype": "TCP", 00:06:43.295 "adrfam": "IPv4", 00:06:43.295 "traddr": "10.0.0.2", 00:06:43.295 "trsvcid": "4420", 00:06:43.295 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:43.295 }, 00:06:43.295 "ctrlr_data": { 00:06:43.295 "cntlid": 1, 00:06:43.295 "vendor_id": "0x8086", 00:06:43.295 "model_number": "SPDK bdev Controller", 00:06:43.295 "serial_number": "SPDK0", 00:06:43.295 "firmware_revision": "25.01", 00:06:43.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:43.295 "oacs": { 00:06:43.295 "security": 0, 00:06:43.295 "format": 0, 00:06:43.295 "firmware": 0, 00:06:43.295 "ns_manage": 0 00:06:43.295 }, 00:06:43.295 "multi_ctrlr": true, 00:06:43.295 "ana_reporting": false 00:06:43.295 }, 00:06:43.295 "vs": { 00:06:43.295 "nvme_version": "1.3" 00:06:43.295 }, 00:06:43.295 "ns_data": { 00:06:43.295 "id": 1, 00:06:43.295 "can_share": true 00:06:43.295 } 00:06:43.295 } 00:06:43.295 ], 00:06:43.295 "mp_policy": "active_passive" 00:06:43.295 } 00:06:43.295 } 00:06:43.295 ] 00:06:43.295 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3780598 00:06:43.296 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:43.296 15:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:43.557 Running I/O for 10 seconds... 00:06:44.500 Latency(us) 00:06:44.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:44.500 Nvme0n1 : 1.00 17904.00 69.94 0.00 0.00 0.00 0.00 0.00 00:06:44.500 =================================================================================================================== 00:06:44.500 Total : 17904.00 69.94 0.00 0.00 0.00 0.00 0.00 00:06:44.500 00:06:45.443 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:45.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.443 Nvme0n1 : 2.00 17966.50 70.18 0.00 0.00 0.00 0.00 0.00 00:06:45.443 =================================================================================================================== 00:06:45.443 Total : 17966.50 70.18 0.00 0.00 0.00 0.00 0.00 00:06:45.443 00:06:45.443 true 00:06:45.443 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:45.443 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:45.703 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:45.703 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:45.703 15:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3780598 00:06:46.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.644 Nvme0n1 : 3.00 18006.33 70.34 0.00 0.00 0.00 0.00 0.00 00:06:46.644 =================================================================================================================== 00:06:46.644 Total : 18006.33 70.34 0.00 0.00 0.00 0.00 0.00 00:06:46.644 00:06:47.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.588 Nvme0n1 : 4.00 18041.50 70.47 0.00 0.00 0.00 0.00 0.00 00:06:47.588 =================================================================================================================== 00:06:47.588 Total : 18041.50 70.47 0.00 0.00 0.00 0.00 0.00 00:06:47.588 00:06:48.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.529 Nvme0n1 : 5.00 18077.80 70.62 0.00 0.00 0.00 0.00 0.00 00:06:48.529 =================================================================================================================== 00:06:48.529 Total : 18077.80 70.62 0.00 0.00 0.00 0.00 0.00 00:06:48.529 00:06:49.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.470 Nvme0n1 : 6.00 18097.17 70.69 0.00 0.00 0.00 0.00 0.00 00:06:49.470 =================================================================================================================== 00:06:49.470 Total : 18097.17 70.69 0.00 0.00 0.00 0.00 0.00 00:06:49.470 00:06:50.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.413 Nvme0n1 : 7.00 18110.29 70.74 0.00 0.00 0.00 0.00 0.00 00:06:50.413 
=================================================================================================================== 00:06:50.413 Total : 18110.29 70.74 0.00 0.00 0.00 0.00 0.00 00:06:50.413 00:06:51.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.366 Nvme0n1 : 8.00 18122.88 70.79 0.00 0.00 0.00 0.00 0.00 00:06:51.366 =================================================================================================================== 00:06:51.366 Total : 18122.88 70.79 0.00 0.00 0.00 0.00 0.00 00:06:51.366 00:06:52.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.387 Nvme0n1 : 9.00 18135.44 70.84 0.00 0.00 0.00 0.00 0.00 00:06:52.387 =================================================================================================================== 00:06:52.387 Total : 18135.44 70.84 0.00 0.00 0.00 0.00 0.00 00:06:52.387 00:06:53.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.795 Nvme0n1 : 10.00 18147.30 70.89 0.00 0.00 0.00 0.00 0.00 00:06:53.795 =================================================================================================================== 00:06:53.795 Total : 18147.30 70.89 0.00 0.00 0.00 0.00 0.00 00:06:53.795 00:06:53.795 00:06:53.795 Latency(us) 00:06:53.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.795 Nvme0n1 : 10.01 18151.60 70.90 0.00 0.00 7048.12 3290.45 12670.29 00:06:53.795 =================================================================================================================== 00:06:53.795 Total : 18151.60 70.90 0.00 0.00 7048.12 3290.45 12670.29 00:06:53.795 { 00:06:53.795 "results": [ 00:06:53.795 { 00:06:53.795 "job": "Nvme0n1", 00:06:53.795 "core_mask": "0x2", 00:06:53.795 "workload": "randwrite", 00:06:53.795 "status": "finished", 00:06:53.795 "queue_depth": 128, 
00:06:53.795 "io_size": 4096, 00:06:53.795 "runtime": 10.008153, 00:06:53.795 "iops": 18151.60099970494, 00:06:53.795 "mibps": 70.90469140509742, 00:06:53.795 "io_failed": 0, 00:06:53.795 "io_timeout": 0, 00:06:53.795 "avg_latency_us": 7048.118766954377, 00:06:53.795 "min_latency_us": 3290.4533333333334, 00:06:53.795 "max_latency_us": 12670.293333333333 00:06:53.795 } 00:06:53.795 ], 00:06:53.795 "core_count": 1 00:06:53.795 } 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3780375 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3780375 ']' 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3780375 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3780375 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3780375' 00:06:53.795 killing process with pid 3780375 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3780375 00:06:53.795 Received shutdown signal, test time was about 10.000000 seconds 00:06:53.795 00:06:53.795 Latency(us) 00:06:53.795 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:53.795 =================================================================================================================== 00:06:53.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3780375 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:53.795 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:54.056 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:54.056 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:54.317 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:54.317 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:54.317 15:04:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:54.317 [2024-10-01 15:04:04.089550] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:54.317 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:54.578 request: 00:06:54.578 { 00:06:54.578 "uuid": "3205bbe1-da96-497f-9cee-2c27a0588152", 00:06:54.578 "method": "bdev_lvol_get_lvstores", 00:06:54.578 "req_id": 1 00:06:54.578 } 00:06:54.578 Got JSON-RPC error response 00:06:54.578 response: 00:06:54.578 { 00:06:54.578 "code": -19, 00:06:54.578 "message": "No such device" 00:06:54.578 } 00:06:54.578 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:06:54.578 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.578 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.578 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.578 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:54.840 aio_bdev 00:06:54.840 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4 00:06:54.840 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4 00:06:54.840 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:54.840 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:06:54.840 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:06:54.840 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:54.840 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:54.840 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4 -t 2000 00:06:55.101 [ 00:06:55.101 { 00:06:55.101 "name": "7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4", 00:06:55.101 "aliases": [ 00:06:55.101 "lvs/lvol" 00:06:55.101 ], 00:06:55.101 "product_name": "Logical Volume", 00:06:55.101 "block_size": 4096, 00:06:55.101 "num_blocks": 38912, 00:06:55.101 "uuid": "7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4", 00:06:55.101 "assigned_rate_limits": { 00:06:55.101 "rw_ios_per_sec": 0, 00:06:55.101 "rw_mbytes_per_sec": 0, 00:06:55.101 "r_mbytes_per_sec": 0, 00:06:55.101 "w_mbytes_per_sec": 0 00:06:55.101 }, 00:06:55.101 "claimed": false, 00:06:55.101 "zoned": false, 00:06:55.101 "supported_io_types": { 00:06:55.101 "read": true, 00:06:55.101 "write": true, 00:06:55.101 "unmap": true, 00:06:55.101 "flush": false, 00:06:55.101 "reset": true, 00:06:55.101 "nvme_admin": false, 00:06:55.101 "nvme_io": false, 00:06:55.101 "nvme_io_md": false, 00:06:55.101 "write_zeroes": true, 00:06:55.101 "zcopy": false, 00:06:55.101 "get_zone_info": false, 00:06:55.101 "zone_management": false, 00:06:55.101 "zone_append": false, 00:06:55.101 "compare": false, 00:06:55.101 "compare_and_write": false, 00:06:55.101 "abort": false, 00:06:55.101 "seek_hole": true, 00:06:55.101 "seek_data": true, 00:06:55.101 "copy": false, 00:06:55.101 "nvme_iov_md": false 00:06:55.101 }, 00:06:55.101 "driver_specific": { 00:06:55.101 "lvol": { 00:06:55.101 "lvol_store_uuid": "3205bbe1-da96-497f-9cee-2c27a0588152", 
00:06:55.101 "base_bdev": "aio_bdev", 00:06:55.101 "thin_provision": false, 00:06:55.101 "num_allocated_clusters": 38, 00:06:55.101 "snapshot": false, 00:06:55.101 "clone": false, 00:06:55.101 "esnap_clone": false 00:06:55.101 } 00:06:55.101 } 00:06:55.101 } 00:06:55.101 ] 00:06:55.101 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:06:55.101 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:55.101 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:55.363 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:55.363 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:55.363 15:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:55.363 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:55.363 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d763af3-d7ed-4f08-a3a9-29cfaad9c5a4 00:06:55.624 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3205bbe1-da96-497f-9cee-2c27a0588152 00:06:55.624 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:55.883 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:55.883 00:06:55.883 real 0m15.691s 00:06:55.883 user 0m15.447s 00:06:55.883 sys 0m1.340s 00:06:55.883 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.883 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:55.883 ************************************ 00:06:55.883 END TEST lvs_grow_clean 00:06:55.883 ************************************ 00:06:55.883 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:55.883 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:55.883 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.883 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:56.145 ************************************ 00:06:56.145 START TEST lvs_grow_dirty 00:06:56.145 ************************************ 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:56.145 15:04:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:56.145 15:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:56.406 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:06:56.406 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:06:56.406 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r 
'.[0].total_data_clusters' 00:06:56.667 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:56.667 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:56.667 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 lvol 150 00:06:56.667 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f6a9326c-cb1f-47b3-a583-8ca8b80b0906 00:06:56.667 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:56.667 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:56.928 [2024-10-01 15:04:06.633173] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:56.928 [2024-10-01 15:04:06.633225] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:56.928 true 00:06:56.928 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:56.928 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:06:57.188 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( 
data_clusters == 49 )) 00:06:57.188 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:57.188 15:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f6a9326c-cb1f-47b3-a583-8ca8b80b0906 00:06:57.448 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:57.448 [2024-10-01 15:04:07.307215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3783479 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3783479 /var/tmp/bdevperf.sock 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@831 -- # '[' -z 3783479 ']' 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:57.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.709 15:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:57.709 [2024-10-01 15:04:07.534117] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:06:57.709 [2024-10-01 15:04:07.534170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783479 ] 00:06:57.971 [2024-10-01 15:04:07.611852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.971 [2024-10-01 15:04:07.666121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.542 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.542 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:06:58.542 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:58.804 Nvme0n1 00:06:58.804 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:59.064 [ 00:06:59.064 { 00:06:59.064 "name": "Nvme0n1", 00:06:59.064 "aliases": [ 00:06:59.064 "f6a9326c-cb1f-47b3-a583-8ca8b80b0906" 00:06:59.064 ], 00:06:59.064 "product_name": "NVMe disk", 00:06:59.064 "block_size": 4096, 00:06:59.064 "num_blocks": 38912, 00:06:59.064 "uuid": "f6a9326c-cb1f-47b3-a583-8ca8b80b0906", 00:06:59.064 "numa_id": 0, 00:06:59.064 "assigned_rate_limits": { 00:06:59.064 "rw_ios_per_sec": 0, 00:06:59.064 "rw_mbytes_per_sec": 0, 00:06:59.064 "r_mbytes_per_sec": 0, 00:06:59.064 "w_mbytes_per_sec": 0 00:06:59.064 }, 00:06:59.064 "claimed": false, 00:06:59.064 "zoned": false, 00:06:59.064 "supported_io_types": { 00:06:59.064 "read": true, 
00:06:59.064 "write": true, 00:06:59.064 "unmap": true, 00:06:59.064 "flush": true, 00:06:59.064 "reset": true, 00:06:59.064 "nvme_admin": true, 00:06:59.064 "nvme_io": true, 00:06:59.064 "nvme_io_md": false, 00:06:59.064 "write_zeroes": true, 00:06:59.064 "zcopy": false, 00:06:59.064 "get_zone_info": false, 00:06:59.064 "zone_management": false, 00:06:59.064 "zone_append": false, 00:06:59.064 "compare": true, 00:06:59.064 "compare_and_write": true, 00:06:59.064 "abort": true, 00:06:59.064 "seek_hole": false, 00:06:59.064 "seek_data": false, 00:06:59.064 "copy": true, 00:06:59.064 "nvme_iov_md": false 00:06:59.064 }, 00:06:59.064 "memory_domains": [ 00:06:59.064 { 00:06:59.064 "dma_device_id": "system", 00:06:59.064 "dma_device_type": 1 00:06:59.064 } 00:06:59.064 ], 00:06:59.064 "driver_specific": { 00:06:59.064 "nvme": [ 00:06:59.064 { 00:06:59.064 "trid": { 00:06:59.064 "trtype": "TCP", 00:06:59.064 "adrfam": "IPv4", 00:06:59.064 "traddr": "10.0.0.2", 00:06:59.064 "trsvcid": "4420", 00:06:59.064 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:59.064 }, 00:06:59.064 "ctrlr_data": { 00:06:59.064 "cntlid": 1, 00:06:59.064 "vendor_id": "0x8086", 00:06:59.064 "model_number": "SPDK bdev Controller", 00:06:59.064 "serial_number": "SPDK0", 00:06:59.064 "firmware_revision": "25.01", 00:06:59.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:59.064 "oacs": { 00:06:59.064 "security": 0, 00:06:59.064 "format": 0, 00:06:59.064 "firmware": 0, 00:06:59.064 "ns_manage": 0 00:06:59.064 }, 00:06:59.064 "multi_ctrlr": true, 00:06:59.064 "ana_reporting": false 00:06:59.064 }, 00:06:59.064 "vs": { 00:06:59.064 "nvme_version": "1.3" 00:06:59.064 }, 00:06:59.064 "ns_data": { 00:06:59.064 "id": 1, 00:06:59.064 "can_share": true 00:06:59.064 } 00:06:59.064 } 00:06:59.064 ], 00:06:59.064 "mp_policy": "active_passive" 00:06:59.064 } 00:06:59.064 } 00:06:59.064 ] 00:06:59.064 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3783811 00:06:59.064 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:59.064 15:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:59.064 Running I/O for 10 seconds... 00:07:00.012 Latency(us) 00:07:00.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.012 Nvme0n1 : 1.00 17903.00 69.93 0.00 0.00 0.00 0.00 0.00 00:07:00.013 =================================================================================================================== 00:07:00.013 Total : 17903.00 69.93 0.00 0.00 0.00 0.00 0.00 00:07:00.013 00:07:00.956 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:01.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.216 Nvme0n1 : 2.00 17968.50 70.19 0.00 0.00 0.00 0.00 0.00 00:07:01.216 =================================================================================================================== 00:07:01.216 Total : 17968.50 70.19 0.00 0.00 0.00 0.00 0.00 00:07:01.216 00:07:01.216 true 00:07:01.216 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:01.216 15:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:01.477 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:01.477 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:01.477 15:04:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3783811 00:07:02.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.049 Nvme0n1 : 3.00 18031.33 70.43 0.00 0.00 0.00 0.00 0.00 00:07:02.049 =================================================================================================================== 00:07:02.049 Total : 18031.33 70.43 0.00 0.00 0.00 0.00 0.00 00:07:02.049 00:07:03.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.434 Nvme0n1 : 4.00 18065.00 70.57 0.00 0.00 0.00 0.00 0.00 00:07:03.434 =================================================================================================================== 00:07:03.434 Total : 18065.00 70.57 0.00 0.00 0.00 0.00 0.00 00:07:03.434 00:07:04.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.376 Nvme0n1 : 5.00 18085.80 70.65 0.00 0.00 0.00 0.00 0.00 00:07:04.376 =================================================================================================================== 00:07:04.376 Total : 18085.80 70.65 0.00 0.00 0.00 0.00 0.00 00:07:04.376 00:07:05.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.145 Nvme0n1 : 6.00 18098.33 70.70 0.00 0.00 0.00 0.00 0.00 00:07:05.145 =================================================================================================================== 00:07:05.145 Total : 18098.33 70.70 0.00 0.00 0.00 0.00 0.00 00:07:05.145 00:07:06.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.090 Nvme0n1 : 7.00 18125.86 70.80 0.00 0.00 0.00 0.00 0.00 00:07:06.090 
=================================================================================================================== 00:07:06.090 Total : 18125.86 70.80 0.00 0.00 0.00 0.00 0.00 00:07:06.090 00:07:07.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.029 Nvme0n1 : 8.00 18139.00 70.86 0.00 0.00 0.00 0.00 0.00 00:07:07.029 =================================================================================================================== 00:07:07.029 Total : 18139.00 70.86 0.00 0.00 0.00 0.00 0.00 00:07:07.029 00:07:08.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.411 Nvme0n1 : 9.00 18148.00 70.89 0.00 0.00 0.00 0.00 0.00 00:07:08.411 =================================================================================================================== 00:07:08.411 Total : 18148.00 70.89 0.00 0.00 0.00 0.00 0.00 00:07:08.411 00:07:09.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.352 Nvme0n1 : 10.00 18162.60 70.95 0.00 0.00 0.00 0.00 0.00 00:07:09.352 =================================================================================================================== 00:07:09.352 Total : 18162.60 70.95 0.00 0.00 0.00 0.00 0.00 00:07:09.352 00:07:09.352 00:07:09.352 Latency(us) 00:07:09.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.352 Nvme0n1 : 10.01 18163.92 70.95 0.00 0.00 7043.87 4287.15 12943.36 00:07:09.352 =================================================================================================================== 00:07:09.352 Total : 18163.92 70.95 0.00 0.00 7043.87 4287.15 12943.36 00:07:09.352 { 00:07:09.352 "results": [ 00:07:09.352 { 00:07:09.352 "job": "Nvme0n1", 00:07:09.352 "core_mask": "0x2", 00:07:09.352 "workload": "randwrite", 00:07:09.352 "status": "finished", 00:07:09.352 "queue_depth": 128, 
00:07:09.352 "io_size": 4096, 00:07:09.352 "runtime": 10.006322, 00:07:09.352 "iops": 18163.916771816857, 00:07:09.352 "mibps": 70.9527998899096, 00:07:09.352 "io_failed": 0, 00:07:09.352 "io_timeout": 0, 00:07:09.352 "avg_latency_us": 7043.869554966236, 00:07:09.352 "min_latency_us": 4287.1466666666665, 00:07:09.352 "max_latency_us": 12943.36 00:07:09.352 } 00:07:09.352 ], 00:07:09.352 "core_count": 1 00:07:09.352 } 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3783479 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3783479 ']' 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3783479 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3783479 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3783479' 00:07:09.352 killing process with pid 3783479 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3783479 00:07:09.352 Received shutdown signal, test time was about 10.000000 seconds 00:07:09.352 00:07:09.352 Latency(us) 00:07:09.352 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:07:09.352 =================================================================================================================== 00:07:09.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:09.352 15:04:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3783479 00:07:09.352 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.612 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:09.612 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:09.612 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3779663 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3779663 00:07:09.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3779663 Killed "${NVMF_APP[@]}" "$@" 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:09.872 15:04:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=3785852 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 3785852 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:09.872 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3785852 ']' 00:07:09.873 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.873 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.873 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:09.873 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.873 15:04:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:09.873 [2024-10-01 15:04:19.713556] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:07:09.873 [2024-10-01 15:04:19.713609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.133 [2024-10-01 15:04:19.778682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.133 [2024-10-01 15:04:19.843462] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.133 [2024-10-01 15:04:19.843497] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.133 [2024-10-01 15:04:19.843505] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.133 [2024-10-01 15:04:19.843512] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.133 [2024-10-01 15:04:19.843518] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:10.133 [2024-10-01 15:04:19.843537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.704 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.704 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:10.704 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:10.704 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.704 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:10.704 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.704 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:10.964 [2024-10-01 15:04:20.702036] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:10.964 [2024-10-01 15:04:20.702156] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:10.964 [2024-10-01 15:04:20.702188] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:10.964 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:10.964 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f6a9326c-cb1f-47b3-a583-8ca8b80b0906 00:07:10.964 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f6a9326c-cb1f-47b3-a583-8ca8b80b0906 
00:07:10.964 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:10.964 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:10.964 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:10.964 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:10.964 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:11.225 15:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f6a9326c-cb1f-47b3-a583-8ca8b80b0906 -t 2000 00:07:11.225 [ 00:07:11.225 { 00:07:11.225 "name": "f6a9326c-cb1f-47b3-a583-8ca8b80b0906", 00:07:11.225 "aliases": [ 00:07:11.225 "lvs/lvol" 00:07:11.225 ], 00:07:11.225 "product_name": "Logical Volume", 00:07:11.225 "block_size": 4096, 00:07:11.225 "num_blocks": 38912, 00:07:11.225 "uuid": "f6a9326c-cb1f-47b3-a583-8ca8b80b0906", 00:07:11.225 "assigned_rate_limits": { 00:07:11.225 "rw_ios_per_sec": 0, 00:07:11.225 "rw_mbytes_per_sec": 0, 00:07:11.225 "r_mbytes_per_sec": 0, 00:07:11.225 "w_mbytes_per_sec": 0 00:07:11.225 }, 00:07:11.225 "claimed": false, 00:07:11.225 "zoned": false, 00:07:11.225 "supported_io_types": { 00:07:11.225 "read": true, 00:07:11.225 "write": true, 00:07:11.225 "unmap": true, 00:07:11.225 "flush": false, 00:07:11.225 "reset": true, 00:07:11.225 "nvme_admin": false, 00:07:11.225 "nvme_io": false, 00:07:11.225 "nvme_io_md": false, 00:07:11.225 "write_zeroes": true, 00:07:11.225 "zcopy": false, 00:07:11.225 "get_zone_info": false, 00:07:11.225 "zone_management": false, 00:07:11.225 "zone_append": 
false, 00:07:11.225 "compare": false, 00:07:11.225 "compare_and_write": false, 00:07:11.225 "abort": false, 00:07:11.225 "seek_hole": true, 00:07:11.225 "seek_data": true, 00:07:11.225 "copy": false, 00:07:11.225 "nvme_iov_md": false 00:07:11.225 }, 00:07:11.225 "driver_specific": { 00:07:11.225 "lvol": { 00:07:11.225 "lvol_store_uuid": "fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1", 00:07:11.225 "base_bdev": "aio_bdev", 00:07:11.225 "thin_provision": false, 00:07:11.225 "num_allocated_clusters": 38, 00:07:11.225 "snapshot": false, 00:07:11.225 "clone": false, 00:07:11.225 "esnap_clone": false 00:07:11.225 } 00:07:11.225 } 00:07:11.225 } 00:07:11.225 ] 00:07:11.225 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:11.225 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:11.225 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:11.485 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:11.485 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:11.485 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:11.745 [2024-10-01 15:04:21.530208] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.745 15:04:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:11.745 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:12.006 request: 00:07:12.006 { 00:07:12.006 "uuid": "fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1", 00:07:12.006 "method": "bdev_lvol_get_lvstores", 00:07:12.006 "req_id": 1 00:07:12.006 } 00:07:12.006 Got JSON-RPC error response 00:07:12.006 response: 00:07:12.006 { 00:07:12.006 "code": -19, 00:07:12.006 "message": "No such device" 00:07:12.006 } 00:07:12.006 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:12.006 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.006 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.006 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.006 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.267 aio_bdev 00:07:12.267 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f6a9326c-cb1f-47b3-a583-8ca8b80b0906 00:07:12.267 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=f6a9326c-cb1f-47b3-a583-8ca8b80b0906 00:07:12.267 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:12.267 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:12.267 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:12.267 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:12.267 15:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:12.267 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f6a9326c-cb1f-47b3-a583-8ca8b80b0906 -t 2000 00:07:12.528 [ 00:07:12.528 { 00:07:12.528 "name": "f6a9326c-cb1f-47b3-a583-8ca8b80b0906", 00:07:12.528 "aliases": [ 00:07:12.528 "lvs/lvol" 00:07:12.528 ], 00:07:12.528 "product_name": "Logical Volume", 00:07:12.528 "block_size": 4096, 00:07:12.528 "num_blocks": 38912, 00:07:12.528 "uuid": "f6a9326c-cb1f-47b3-a583-8ca8b80b0906", 00:07:12.528 "assigned_rate_limits": { 00:07:12.528 "rw_ios_per_sec": 0, 00:07:12.528 "rw_mbytes_per_sec": 0, 00:07:12.528 "r_mbytes_per_sec": 0, 00:07:12.528 "w_mbytes_per_sec": 0 00:07:12.528 }, 00:07:12.528 "claimed": false, 00:07:12.528 "zoned": false, 00:07:12.528 "supported_io_types": { 00:07:12.528 "read": true, 00:07:12.528 "write": true, 00:07:12.528 "unmap": true, 00:07:12.528 "flush": false, 00:07:12.528 "reset": true, 00:07:12.528 "nvme_admin": false, 00:07:12.528 "nvme_io": false, 00:07:12.528 "nvme_io_md": false, 00:07:12.528 "write_zeroes": true, 00:07:12.528 "zcopy": false, 00:07:12.528 "get_zone_info": false, 00:07:12.528 "zone_management": false, 00:07:12.528 "zone_append": false, 00:07:12.528 "compare": false, 00:07:12.528 "compare_and_write": false, 
00:07:12.528 "abort": false, 00:07:12.528 "seek_hole": true, 00:07:12.528 "seek_data": true, 00:07:12.528 "copy": false, 00:07:12.528 "nvme_iov_md": false 00:07:12.528 }, 00:07:12.528 "driver_specific": { 00:07:12.528 "lvol": { 00:07:12.528 "lvol_store_uuid": "fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1", 00:07:12.528 "base_bdev": "aio_bdev", 00:07:12.528 "thin_provision": false, 00:07:12.528 "num_allocated_clusters": 38, 00:07:12.528 "snapshot": false, 00:07:12.528 "clone": false, 00:07:12.528 "esnap_clone": false 00:07:12.528 } 00:07:12.528 } 00:07:12.528 } 00:07:12.528 ] 00:07:12.528 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:12.528 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:12.528 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:12.528 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:12.528 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:12.528 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:12.789 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:12.789 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f6a9326c-cb1f-47b3-a583-8ca8b80b0906 00:07:13.050 15:04:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa3fa745-f42c-4c7d-ba8c-079ab29b8eb1 00:07:13.311 15:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:13.311 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.311 00:07:13.311 real 0m17.402s 00:07:13.311 user 0m45.647s 00:07:13.311 sys 0m2.847s 00:07:13.311 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.311 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.311 ************************************ 00:07:13.311 END TEST lvs_grow_dirty 00:07:13.311 ************************************ 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:13.572 nvmf_trace.0 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:13.572 rmmod nvme_tcp 00:07:13.572 rmmod nvme_fabrics 00:07:13.572 rmmod nvme_keyring 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 3785852 ']' 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 3785852 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3785852 ']' 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3785852 
00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3785852 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3785852' 00:07:13.572 killing process with pid 3785852 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3785852 00:07:13.572 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3785852 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.833 15:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.765 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:15.765 00:07:15.765 real 0m44.374s 00:07:15.765 user 1m7.375s 00:07:15.765 sys 0m10.218s 00:07:15.765 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.765 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.765 ************************************ 00:07:15.765 END TEST nvmf_lvs_grow 00:07:15.765 ************************************ 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.025 ************************************ 00:07:16.025 START TEST nvmf_bdev_io_wait 00:07:16.025 ************************************ 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:16.025 * Looking for test storage... 
00:07:16.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.025 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:16.287 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.287 --rc genhtml_branch_coverage=1 00:07:16.287 --rc genhtml_function_coverage=1 00:07:16.287 --rc genhtml_legend=1 00:07:16.287 --rc geninfo_all_blocks=1 00:07:16.287 --rc geninfo_unexecuted_blocks=1 00:07:16.287 00:07:16.287 ' 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:16.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.287 --rc genhtml_branch_coverage=1 00:07:16.287 --rc genhtml_function_coverage=1 00:07:16.287 --rc genhtml_legend=1 00:07:16.287 --rc geninfo_all_blocks=1 00:07:16.287 --rc geninfo_unexecuted_blocks=1 00:07:16.287 00:07:16.287 ' 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:16.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.287 --rc genhtml_branch_coverage=1 00:07:16.287 --rc genhtml_function_coverage=1 00:07:16.287 --rc genhtml_legend=1 00:07:16.287 --rc geninfo_all_blocks=1 00:07:16.287 --rc geninfo_unexecuted_blocks=1 00:07:16.287 00:07:16.287 ' 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:16.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.287 --rc genhtml_branch_coverage=1 00:07:16.287 --rc genhtml_function_coverage=1 00:07:16.287 --rc genhtml_legend=1 00:07:16.287 --rc geninfo_all_blocks=1 00:07:16.287 --rc geninfo_unexecuted_blocks=1 00:07:16.287 00:07:16.287 ' 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.287 15:04:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.287 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:16.288 15:04:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:24.457 15:04:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:24.457 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:24.457 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.457 
15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:24.457 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:24.457 
15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:24.457 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # is_hw=yes 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:24.457 15:04:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.457 15:04:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:24.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:07:24.457 00:07:24.457 --- 10.0.0.2 ping statistics --- 00:07:24.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.457 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:07:24.457 00:07:24.457 --- 10.0.0.1 ping statistics --- 00:07:24.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.457 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # return 0 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:24.457 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=3790926 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 3790926 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3790926 ']' 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.458 15:04:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 [2024-10-01 15:04:33.266613] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:07:24.458 [2024-10-01 15:04:33.266674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.458 [2024-10-01 15:04:33.340283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.458 [2024-10-01 15:04:33.406871] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.458 [2024-10-01 15:04:33.406910] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.458 [2024-10-01 15:04:33.406918] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.458 [2024-10-01 15:04:33.406924] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.458 [2024-10-01 15:04:33.406930] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:24.458 [2024-10-01 15:04:33.406991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.458 [2024-10-01 15:04:33.407107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.458 [2024-10-01 15:04:33.407153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.458 [2024-10-01 15:04:33.407154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 [2024-10-01 15:04:34.163579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 Malloc0 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:24.458 15:04:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:24.458 [2024-10-01 15:04:34.235026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3791150 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3791153 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:24.458 { 00:07:24.458 "params": { 00:07:24.458 "name": "Nvme$subsystem", 00:07:24.458 "trtype": "$TEST_TRANSPORT", 00:07:24.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.458 "adrfam": "ipv4", 00:07:24.458 "trsvcid": "$NVMF_PORT", 00:07:24.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.458 "hdgst": ${hdgst:-false}, 00:07:24.458 "ddgst": ${ddgst:-false} 00:07:24.458 }, 00:07:24.458 "method": "bdev_nvme_attach_controller" 00:07:24.458 } 00:07:24.458 EOF 00:07:24.458 )") 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3791155 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:24.458 { 00:07:24.458 "params": { 00:07:24.458 "name": "Nvme$subsystem", 00:07:24.458 "trtype": "$TEST_TRANSPORT", 00:07:24.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.458 "adrfam": "ipv4", 00:07:24.458 "trsvcid": "$NVMF_PORT", 00:07:24.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.458 "hdgst": ${hdgst:-false}, 00:07:24.458 "ddgst": ${ddgst:-false} 00:07:24.458 }, 
00:07:24.458 "method": "bdev_nvme_attach_controller" 00:07:24.458 } 00:07:24.458 EOF 00:07:24.458 )") 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3791159 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:24.458 { 00:07:24.458 "params": { 00:07:24.458 "name": "Nvme$subsystem", 00:07:24.458 "trtype": "$TEST_TRANSPORT", 00:07:24.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.458 "adrfam": "ipv4", 00:07:24.458 "trsvcid": "$NVMF_PORT", 00:07:24.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.458 "hdgst": ${hdgst:-false}, 00:07:24.458 "ddgst": ${ddgst:-false} 00:07:24.458 }, 00:07:24.458 "method": "bdev_nvme_attach_controller" 00:07:24.458 } 00:07:24.458 EOF 00:07:24.458 )") 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json 
/dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:07:24.458 { 00:07:24.458 "params": { 00:07:24.458 "name": "Nvme$subsystem", 00:07:24.458 "trtype": "$TEST_TRANSPORT", 00:07:24.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.458 "adrfam": "ipv4", 00:07:24.458 "trsvcid": "$NVMF_PORT", 00:07:24.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.458 "hdgst": ${hdgst:-false}, 00:07:24.458 "ddgst": ${ddgst:-false} 00:07:24.458 }, 00:07:24.458 "method": "bdev_nvme_attach_controller" 00:07:24.458 } 00:07:24.458 EOF 00:07:24.458 )") 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3791150 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:24.458 "params": { 00:07:24.458 "name": "Nvme1", 00:07:24.458 "trtype": "tcp", 00:07:24.458 "traddr": "10.0.0.2", 00:07:24.458 "adrfam": "ipv4", 00:07:24.458 "trsvcid": "4420", 00:07:24.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:24.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:24.458 "hdgst": false, 00:07:24.458 "ddgst": false 00:07:24.458 }, 00:07:24.458 "method": "bdev_nvme_attach_controller" 00:07:24.458 }' 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:24.458 "params": { 00:07:24.458 "name": "Nvme1", 00:07:24.458 "trtype": "tcp", 00:07:24.458 "traddr": "10.0.0.2", 00:07:24.458 "adrfam": "ipv4", 00:07:24.458 "trsvcid": "4420", 00:07:24.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:24.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:24.458 "hdgst": false, 00:07:24.458 "ddgst": false 00:07:24.458 }, 00:07:24.458 "method": "bdev_nvme_attach_controller" 00:07:24.458 }' 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:24.458 "params": { 00:07:24.458 "name": "Nvme1", 00:07:24.458 "trtype": "tcp", 00:07:24.458 "traddr": "10.0.0.2", 00:07:24.458 "adrfam": "ipv4", 00:07:24.458 "trsvcid": "4420", 00:07:24.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:24.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:24.458 "hdgst": false, 00:07:24.458 "ddgst": false 00:07:24.458 }, 00:07:24.458 "method": "bdev_nvme_attach_controller" 00:07:24.458 }' 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@581 -- # IFS=, 00:07:24.458 15:04:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:07:24.458 "params": { 00:07:24.458 "name": "Nvme1", 00:07:24.458 "trtype": "tcp", 00:07:24.458 "traddr": "10.0.0.2", 00:07:24.458 "adrfam": "ipv4", 00:07:24.458 "trsvcid": "4420", 00:07:24.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:24.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:24.458 "hdgst": false, 00:07:24.458 "ddgst": false 00:07:24.458 }, 00:07:24.458 "method": "bdev_nvme_attach_controller" 00:07:24.458 }' 00:07:24.458 [2024-10-01 15:04:34.288901] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:07:24.458 [2024-10-01 15:04:34.288956] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:24.458 [2024-10-01 15:04:34.291416] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:07:24.458 [2024-10-01 15:04:34.291468] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:24.458 [2024-10-01 15:04:34.293881] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:07:24.458 [2024-10-01 15:04:34.293929] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:24.458 [2024-10-01 15:04:34.296167] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:07:24.458 [2024-10-01 15:04:34.296214] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:24.720 [2024-10-01 15:04:34.438092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.720 [2024-10-01 15:04:34.489493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:24.720 [2024-10-01 15:04:34.493559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.720 [2024-10-01 15:04:34.542124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.720 [2024-10-01 15:04:34.545061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:07:24.981 [2024-10-01 15:04:34.590164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.981 [2024-10-01 15:04:34.593572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:07:24.981 [2024-10-01 15:04:34.640241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:07:24.981 Running I/O for 1 seconds... 00:07:25.242 Running I/O for 1 seconds... 00:07:25.242 Running I/O for 1 seconds... 00:07:25.242 Running I/O for 1 seconds... 
00:07:26.187 10106.00 IOPS, 39.48 MiB/s 00:07:26.187 Latency(us) 00:07:26.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.187 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:26.187 Nvme1n1 : 1.02 10056.30 39.28 0.00 0.00 12600.62 6799.36 25340.59 00:07:26.187 =================================================================================================================== 00:07:26.187 Total : 10056.30 39.28 0.00 0.00 12600.62 6799.36 25340.59 00:07:26.187 15:04:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3791153 00:07:26.187 187720.00 IOPS, 733.28 MiB/s 00:07:26.187 Latency(us) 00:07:26.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.187 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:26.187 Nvme1n1 : 1.00 187350.11 731.84 0.00 0.00 679.71 305.49 1979.73 00:07:26.187 =================================================================================================================== 00:07:26.187 Total : 187350.11 731.84 0.00 0.00 679.71 305.49 1979.73 00:07:26.187 18444.00 IOPS, 72.05 MiB/s 00:07:26.187 Latency(us) 00:07:26.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.187 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:26.187 Nvme1n1 : 1.01 18480.52 72.19 0.00 0.00 6908.92 3235.84 13762.56 00:07:26.187 =================================================================================================================== 00:07:26.187 Total : 18480.52 72.19 0.00 0.00 6908.92 3235.84 13762.56 00:07:26.187 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3791155 00:07:26.187 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3791159 00:07:26.449 10669.00 IOPS, 41.68 MiB/s 00:07:26.449 Latency(us) 00:07:26.449 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:07:26.449 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:26.449 Nvme1n1 : 1.01 10779.09 42.11 0.00 0.00 11844.67 3577.17 37137.07 00:07:26.449 =================================================================================================================== 00:07:26.449 Total : 10779.09 42.11 0.00 0.00 11844.67 3577.17 37137.07 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:26.449 rmmod nvme_tcp 00:07:26.449 rmmod nvme_fabrics 00:07:26.449 rmmod nvme_keyring 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 3790926 ']' 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 3790926 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3790926 ']' 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3790926 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.449 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3790926 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3790926' 00:07:26.711 killing process with pid 3790926 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3790926 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3790926 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:26.711 15:04:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.711 15:04:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.260 00:07:29.260 real 0m12.881s 00:07:29.260 user 0m19.827s 00:07:29.260 sys 0m7.013s 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:29.260 ************************************ 00:07:29.260 END TEST nvmf_bdev_io_wait 00:07:29.260 ************************************ 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.260 ************************************ 00:07:29.260 START TEST nvmf_queue_depth 00:07:29.260 ************************************ 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:29.260 * Looking for test storage... 00:07:29.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.260 
15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.260 15:04:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:29.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.260 --rc genhtml_branch_coverage=1 00:07:29.260 --rc genhtml_function_coverage=1 00:07:29.260 --rc genhtml_legend=1 00:07:29.260 --rc geninfo_all_blocks=1 00:07:29.260 --rc geninfo_unexecuted_blocks=1 00:07:29.260 00:07:29.260 ' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:29.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.260 --rc genhtml_branch_coverage=1 00:07:29.260 --rc genhtml_function_coverage=1 00:07:29.260 --rc genhtml_legend=1 00:07:29.260 --rc geninfo_all_blocks=1 00:07:29.260 --rc geninfo_unexecuted_blocks=1 00:07:29.260 00:07:29.260 ' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:29.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.260 --rc genhtml_branch_coverage=1 00:07:29.260 --rc genhtml_function_coverage=1 00:07:29.260 --rc genhtml_legend=1 00:07:29.260 --rc geninfo_all_blocks=1 00:07:29.260 --rc geninfo_unexecuted_blocks=1 00:07:29.260 00:07:29.260 ' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:29.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.260 --rc 
genhtml_branch_coverage=1 00:07:29.260 --rc genhtml_function_coverage=1 00:07:29.260 --rc genhtml_legend=1 00:07:29.260 --rc geninfo_all_blocks=1 00:07:29.260 --rc geninfo_unexecuted_blocks=1 00:07:29.260 00:07:29.260 ' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:29.260 15:04:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.260 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:29.261 15:04:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.261 15:04:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.846 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.846 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 
00:07:35.846 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.846 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:35.847 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:35.847 15:04:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:35.847 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 
-- # [[ tcp == tcp ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:35.847 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:35.847 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 
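The device discovery logged above (nvmf/common.sh@407 and @423) maps each PCI function to its kernel net device by globbing sysfs and stripping the path. As a hedged sketch, not SPDK's actual helper: the function name `pci_to_netdevs` and the `sysfs_root` argument are inventions of this illustration (the extra argument exists only so the lookup can be exercised against a fake tree).

```shell
# Sketch of the lookup behind:
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
#   pci_net_devs=("${pci_net_devs[@]##*/}")
# List /sys/bus/pci/devices/<pci>/net/ and keep only the basename,
# e.g. 0000:4b:00.0 -> cvl_0_0 on the machine in this log.
pci_to_netdevs() {
    pci=$1
    sysfs_root=${2:-/sys}   # hypothetical override, for testing only
    for d in "$sysfs_root/bus/pci/devices/$pci/net/"*; do
        # If the glob matched nothing it stays literal; -e filters that out.
        [ -e "$d" ] && printf '%s\n' "${d##*/}"
    done
}
```

On the host in this log, `pci_to_netdevs 0000:4b:00.0` would be expected to print the interface later renamed/used as `cvl_0_0`.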
00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # is_hw=yes 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr 
flush cvl_0_0 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.847 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:36.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:36.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:07:36.108 00:07:36.108 --- 10.0.0.2 ping statistics --- 00:07:36.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.108 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:07:36.108 00:07:36.108 --- 10.0.0.1 ping statistics --- 00:07:36.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.108 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # return 0 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # 
timing_enter start_nvmf_tgt 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.108 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=3795655 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 3795655 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3795655 ']' 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.369 15:04:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.369 [2024-10-01 15:04:46.024466] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:07:36.369 [2024-10-01 15:04:46.024520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.369 [2024-10-01 15:04:46.113780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.369 [2024-10-01 15:04:46.204958] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.369 [2024-10-01 15:04:46.205024] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.369 [2024-10-01 15:04:46.205034] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.369 [2024-10-01 15:04:46.205041] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.369 [2024-10-01 15:04:46.205047] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
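The one error earlier in this log, `common.sh: line 33: [: : integer expression expected`, comes from `'[' '' -eq 1 ']'`: the `test`/`[` builtin requires integer operands for `-eq`, and the variable expanded to an empty string. The test still behaves as "false" (exit status 2 takes the else branch), so the run continues; a minimal reproduction and a guarded variant, with an illustrative variable name rather than SPDK's:

```shell
# Reproduce: -eq with an empty operand prints the "integer expression
# expected" diagnostic and returns a non-zero status.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "flag empty or non-integer: treated as false"
fi

# Guarded form: default the empty value to 0 so -eq always sees an integer
# and no diagnostic is emitted.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```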
00:07:36.369 [2024-10-01 15:04:46.205080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.311 [2024-10-01 15:04:46.884690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.311 Malloc0 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.311 15:04:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.311 [2024-10-01 15:04:46.956939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3796003 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3796003 /var/tmp/bdevperf.sock 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3796003 ']' 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:37.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.311 15:04:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:37.311 [2024-10-01 15:04:47.015302] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:07:37.311 [2024-10-01 15:04:47.015371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3796003 ] 00:07:37.311 [2024-10-01 15:04:47.080810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.311 [2024-10-01 15:04:47.155124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.251 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.251 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:38.251 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:38.251 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.251 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.251 NVMe0n1 00:07:38.251 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.251 15:04:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:38.251 Running I/O for 10 seconds... 
00:07:48.647 11264.00 IOPS, 44.00 MiB/s 11328.50 IOPS, 44.25 MiB/s 11567.00 IOPS, 45.18 MiB/s 11558.75 IOPS, 45.15 MiB/s 11658.20 IOPS, 45.54 MiB/s 11632.50 IOPS, 45.44 MiB/s 11697.00 IOPS, 45.69 MiB/s 11707.50 IOPS, 45.73 MiB/s 11720.89 IOPS, 45.78 MiB/s 11772.80 IOPS, 45.99 MiB/s 00:07:48.647 Latency(us) 00:07:48.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.647 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:48.647 Verification LBA range: start 0x0 length 0x4000 00:07:48.647 NVMe0n1 : 10.07 11785.26 46.04 0.00 0.00 86597.10 24139.09 64225.28 00:07:48.647 =================================================================================================================== 00:07:48.647 Total : 11785.26 46.04 0.00 0.00 86597.10 24139.09 64225.28 00:07:48.647 { 00:07:48.647 "results": [ 00:07:48.647 { 00:07:48.647 "job": "NVMe0n1", 00:07:48.647 "core_mask": "0x1", 00:07:48.647 "workload": "verify", 00:07:48.647 "status": "finished", 00:07:48.647 "verify_range": { 00:07:48.647 "start": 0, 00:07:48.647 "length": 16384 00:07:48.647 }, 00:07:48.647 "queue_depth": 1024, 00:07:48.647 "io_size": 4096, 00:07:48.647 "runtime": 10.071143, 00:07:48.647 "iops": 11785.256152156711, 00:07:48.647 "mibps": 46.036156844362154, 00:07:48.647 "io_failed": 0, 00:07:48.647 "io_timeout": 0, 00:07:48.647 "avg_latency_us": 86597.0963392338, 00:07:48.647 "min_latency_us": 24139.093333333334, 00:07:48.647 "max_latency_us": 64225.28 00:07:48.647 } 00:07:48.647 ], 00:07:48.647 "core_count": 1 00:07:48.647 } 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3796003 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3796003 ']' 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3796003 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@955 -- # uname 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3796003 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3796003' 00:07:48.647 killing process with pid 3796003 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3796003 00:07:48.647 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.647 00:07:48.647 Latency(us) 00:07:48.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.647 =================================================================================================================== 00:07:48.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3796003 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- 
# set +e 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.647 rmmod nvme_tcp 00:07:48.647 rmmod nvme_fabrics 00:07:48.647 rmmod nvme_keyring 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 3795655 ']' 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 3795655 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3795655 ']' 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3795655 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.647 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3795655 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3795655' 00:07:48.907 killing process with pid 3795655 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@969 -- # kill 3795655 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3795655 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.907 15:04:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.449 00:07:51.449 real 0m22.120s 00:07:51.449 user 0m25.642s 00:07:51.449 sys 0m6.653s 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 
00:07:51.449 ************************************ 00:07:51.449 END TEST nvmf_queue_depth 00:07:51.449 ************************************ 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.449 ************************************ 00:07:51.449 START TEST nvmf_target_multipath 00:07:51.449 ************************************ 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:51.449 * Looking for test storage... 
00:07:51.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:07:51.449 15:05:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:51.450 15:05:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.450 --rc genhtml_branch_coverage=1 00:07:51.450 --rc genhtml_function_coverage=1 00:07:51.450 --rc genhtml_legend=1 00:07:51.450 --rc geninfo_all_blocks=1 00:07:51.450 --rc geninfo_unexecuted_blocks=1 00:07:51.450 00:07:51.450 ' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.450 --rc genhtml_branch_coverage=1 00:07:51.450 --rc genhtml_function_coverage=1 00:07:51.450 --rc genhtml_legend=1 00:07:51.450 --rc geninfo_all_blocks=1 00:07:51.450 --rc geninfo_unexecuted_blocks=1 00:07:51.450 00:07:51.450 ' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.450 --rc genhtml_branch_coverage=1 00:07:51.450 --rc genhtml_function_coverage=1 00:07:51.450 --rc genhtml_legend=1 00:07:51.450 --rc geninfo_all_blocks=1 00:07:51.450 --rc geninfo_unexecuted_blocks=1 00:07:51.450 00:07:51.450 ' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:51.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.450 --rc genhtml_branch_coverage=1 00:07:51.450 --rc genhtml_function_coverage=1 00:07:51.450 --rc genhtml_legend=1 00:07:51.450 --rc geninfo_all_blocks=1 00:07:51.450 --rc geninfo_unexecuted_blocks=1 00:07:51.450 00:07:51.450 ' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:51.450 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.451 15:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@359 -- # (( 
2 == 0 )) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:59.596 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:07:59.596 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:59.596 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:07:59.597 15:05:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:59.597 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.597 15:05:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ up == up ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:59.597 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # is_hw=yes 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.597 
15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:59.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:07:59.597 00:07:59.597 --- 10.0.0.2 ping statistics --- 00:07:59.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.597 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:59.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:07:59.597 00:07:59.597 --- 10.0.0.1 ping statistics --- 00:07:59.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.597 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # return 0 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:59.597 only one NIC for nvmf test 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:59.597 rmmod nvme_tcp 00:07:59.597 rmmod nvme_fabrics 00:07:59.597 rmmod nvme_keyring 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.597 15:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.983 00:08:00.983 real 0m9.883s 00:08:00.983 user 0m2.037s 00:08:00.983 sys 0m5.793s 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:00.983 ************************************ 00:08:00.983 END TEST nvmf_target_multipath 00:08:00.983 ************************************ 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.983 ************************************ 00:08:00.983 START TEST nvmf_zcopy 00:08:00.983 ************************************ 00:08:00.983 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:01.245 * Looking for test storage... 
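The `nvmftestfini` teardown traced above (retried `modprobe -r` of the NVMe fabrics modules, iptables restore filtered on the SPDK tag, namespace removal, address flush) can be sketched as the following sequence. This is a reconstruction from the trace, not the actual `nvmf/common.sh` source: `ip netns delete` stands in for the log's `_remove_spdk_ns` helper, and the `cvl_0_0_ns_spdk` / `cvl_0_1` names are the ones from this run. Assumes root on the test host.

```shell
# Sketch of the nvmftestfini cleanup seen in the trace above (assumes root).
set +e                           # module removal may fail while devices are busy
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e

# Drop only the iptables rules the test tagged with SPDK_NVMF, keep the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the target namespace (stand-in for _remove_spdk_ns) and
# flush the initiator-side address.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null
ip -4 addr flush cvl_0_1
```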
00:08:01.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.246 
15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:01.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.246 --rc genhtml_branch_coverage=1 00:08:01.246 --rc genhtml_function_coverage=1 00:08:01.246 --rc genhtml_legend=1 00:08:01.246 --rc geninfo_all_blocks=1 00:08:01.246 --rc 
geninfo_unexecuted_blocks=1 00:08:01.246 00:08:01.246 ' 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:01.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.246 --rc genhtml_branch_coverage=1 00:08:01.246 --rc genhtml_function_coverage=1 00:08:01.246 --rc genhtml_legend=1 00:08:01.246 --rc geninfo_all_blocks=1 00:08:01.246 --rc geninfo_unexecuted_blocks=1 00:08:01.246 00:08:01.246 ' 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:01.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.246 --rc genhtml_branch_coverage=1 00:08:01.246 --rc genhtml_function_coverage=1 00:08:01.246 --rc genhtml_legend=1 00:08:01.246 --rc geninfo_all_blocks=1 00:08:01.246 --rc geninfo_unexecuted_blocks=1 00:08:01.246 00:08:01.246 ' 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:01.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.246 --rc genhtml_branch_coverage=1 00:08:01.246 --rc genhtml_function_coverage=1 00:08:01.246 --rc genhtml_legend=1 00:08:01.246 --rc geninfo_all_blocks=1 00:08:01.246 --rc geninfo_unexecuted_blocks=1 00:08:01.246 00:08:01.246 ' 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.246 15:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.246 15:05:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:01.246 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:01.247 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:01.247 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.247 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.247 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.247 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:01.247 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:01.247 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.247 15:05:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # 
set +x 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:09.388 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice 
== unknown ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:09.388 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:09.388 15:05:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:09.388 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:09.388 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:09.388 15:05:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # is_hw=yes 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.388 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:08:09.389 00:08:09.389 --- 10.0.0.2 ping statistics --- 00:08:09.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.389 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:08:09.389 00:08:09.389 --- 10.0.0.1 ping statistics --- 00:08:09.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.389 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # return 0 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=3806700 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 3806700 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns 
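The topology built in the `nvmf_tcp_init` trace above — one physical port moved into a network namespace as the target side, the other left in the root namespace as the initiator, then verified with a ping in each direction — follows this pattern. A sketch assuming root and the `cvl_0_0`/`cvl_0_1` port names and 10.0.0.x addresses from this log:

```shell
# Build the two-sided NVMe/TCP test topology from this log (assumes root and
# that cvl_0_0 / cvl_0_1 are the two ports of the NIC under test).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listen port; the SPDK_NVMF comment tag is what lets the
# cleanup path strip this rule later via iptables-save | grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```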
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3806700 ']' 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.389 15:05:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.389 [2024-10-01 15:05:18.646216] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:08:09.389 [2024-10-01 15:05:18.646283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.389 [2024-10-01 15:05:18.733820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.389 [2024-10-01 15:05:18.826373] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.389 [2024-10-01 15:05:18.826434] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:09.389 [2024-10-01 15:05:18.826442] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.389 [2024-10-01 15:05:18.826449] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.389 [2024-10-01 15:05:18.826456] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.389 [2024-10-01 15:05:18.826488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.649 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.649 [2024-10-01 15:05:19.505441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.910 [2024-10-01 15:05:19.529693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.910 malloc0 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:09.910 { 00:08:09.910 "params": { 00:08:09.910 "name": "Nvme$subsystem", 00:08:09.910 "trtype": "$TEST_TRANSPORT", 00:08:09.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:09.910 "adrfam": "ipv4", 00:08:09.910 "trsvcid": "$NVMF_PORT", 00:08:09.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:09.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:09.910 "hdgst": ${hdgst:-false}, 00:08:09.910 "ddgst": ${ddgst:-false} 00:08:09.910 }, 00:08:09.910 "method": "bdev_nvme_attach_controller" 00:08:09.910 } 00:08:09.910 EOF 00:08:09.910 )") 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:08:09.910 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
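The xtrace above shows `gen_nvmf_target_json` assembling a per-subsystem config fragment from a heredoc template and collecting it into `config=()`. A minimal sketch of that pattern, assuming stand-in values for the exported test environment (`TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT`); this is not the actual `nvmf/common.sh`:

```shell
# Stand-ins for the environment the test exports before calling the generator.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# Expand the heredoc template into one bdev_nvme_attach_controller fragment,
# mirroring the template visible in the log. hdgst/ddgst default to false
# when unset, exactly as the ${hdgst:-false} expansion does above.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

In the real helper the fragment is then joined with `IFS=,` and normalized through `jq .`, which is the `printf '%s\n' '{ ... }'` output that follows in the log.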
00:08:09.911 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:08:09.911 15:05:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:09.911 "params": { 00:08:09.911 "name": "Nvme1", 00:08:09.911 "trtype": "tcp", 00:08:09.911 "traddr": "10.0.0.2", 00:08:09.911 "adrfam": "ipv4", 00:08:09.911 "trsvcid": "4420", 00:08:09.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:09.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:09.911 "hdgst": false, 00:08:09.911 "ddgst": false 00:08:09.911 }, 00:08:09.911 "method": "bdev_nvme_attach_controller" 00:08:09.911 }' 00:08:09.911 [2024-10-01 15:05:19.643418] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:08:09.911 [2024-10-01 15:05:19.643471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807007 ] 00:08:09.911 [2024-10-01 15:05:19.705288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.171 [2024-10-01 15:05:19.773498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.431 Running I/O for 10 seconds... 
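The `--json /dev/fd/62` argument above is how the generated config reaches bdevperf: the test passes it through bash process substitution, so no temp file is written and the fd path appears in the traced command line. A minimal sketch of that handoff, where `gen_config` is a hypothetical stand-in for `gen_nvmf_target_json`:

```shell
# Hypothetical stand-in for gen_nvmf_target_json; emits the resolved config.
gen_config() {
  printf '%s\n' '{"method":"bdev_nvme_attach_controller"}'
}

# In zcopy.sh the invocation is roughly:
#   bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
# Bash turns <(...) into a /dev/fd/NN path, which is what the log shows.
# cat stands in for bdevperf reading its --json argument:
out=$(cat <(gen_config))
echo "$out"
```

The same pattern recurs later in the log for the second bdevperf run (`/dev/fd/63`, `-w randrw -M 50`), just with a different fd number.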
00:08:20.294 6833.00 IOPS, 53.38 MiB/s 8314.50 IOPS, 64.96 MiB/s 8815.67 IOPS, 68.87 MiB/s 9063.75 IOPS, 70.81 MiB/s 9214.60 IOPS, 71.99 MiB/s 9315.00 IOPS, 72.77 MiB/s 9388.00 IOPS, 73.34 MiB/s 9441.12 IOPS, 73.76 MiB/s 9485.11 IOPS, 74.10 MiB/s 9516.70 IOPS, 74.35 MiB/s 00:08:20.294 Latency(us) 00:08:20.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.294 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:20.294 Verification LBA range: start 0x0 length 0x1000 00:08:20.294 Nvme1n1 : 10.01 9517.97 74.36 0.00 0.00 13397.43 1720.32 26432.85 00:08:20.294 =================================================================================================================== 00:08:20.294 Total : 9517.97 74.36 0.00 0.00 13397.43 1720.32 26432.85 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3809065 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:20.556 { 00:08:20.556 "params": { 00:08:20.556 "name": "Nvme$subsystem", 00:08:20.556 "trtype": "$TEST_TRANSPORT", 00:08:20.556 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.556 "adrfam": "ipv4", 00:08:20.556 "trsvcid": "$NVMF_PORT", 00:08:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.556 "hdgst": ${hdgst:-false}, 00:08:20.556 "ddgst": ${ddgst:-false} 00:08:20.556 }, 00:08:20.556 "method": "bdev_nvme_attach_controller" 00:08:20.556 } 00:08:20.556 EOF 00:08:20.556 )") 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:08:20.556 [2024-10-01 15:05:30.259469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.259497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:08:20.556 15:05:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:20.556 "params": { 00:08:20.556 "name": "Nvme1", 00:08:20.556 "trtype": "tcp", 00:08:20.556 "traddr": "10.0.0.2", 00:08:20.556 "adrfam": "ipv4", 00:08:20.556 "trsvcid": "4420", 00:08:20.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:20.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:20.556 "hdgst": false, 00:08:20.556 "ddgst": false 00:08:20.556 }, 00:08:20.556 "method": "bdev_nvme_attach_controller" 00:08:20.556 }' 00:08:20.556 [2024-10-01 15:05:30.271469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.271478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.283496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.283503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 
15:05:30.295526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.295533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.307557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.307564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.313871] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:08:20.556 [2024-10-01 15:05:30.313918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809065 ] 00:08:20.556 [2024-10-01 15:05:30.319588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.319595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.331619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.331626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.343650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.343657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.355681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.355688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.367713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 
[2024-10-01 15:05:30.367720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.372753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.556 [2024-10-01 15:05:30.379745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.379754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.391776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.391785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.403807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.403818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.556 [2024-10-01 15:05:30.415838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.556 [2024-10-01 15:05:30.415850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.817 [2024-10-01 15:05:30.427869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.817 [2024-10-01 15:05:30.427877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.817 [2024-10-01 15:05:30.437209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.817 [2024-10-01 15:05:30.439900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.817 [2024-10-01 15:05:30.439907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.817 [2024-10-01 15:05:30.451934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.817 [2024-10-01 15:05:30.451946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:20.817 [2024-10-01 15:05:30.463966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.817 [2024-10-01 15:05:30.463978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.475992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.476005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.488025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.488035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.500051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.500057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.512080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.512087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.524120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.524136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.536144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.536154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.548176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.548187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.560206] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.560217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.572235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.572242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.584279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.584293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 Running I/O for 5 seconds... 00:08:20.818 [2024-10-01 15:05:30.598853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.598869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.612305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.612322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.625548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.625564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.638927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.638943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.652524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.652540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.818 [2024-10-01 15:05:30.665248] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.818 [2024-10-01 15:05:30.665263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.679095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.679110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.691905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.691920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.704654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.704670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.717905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.717920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.731205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.731219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.744507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.744522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.757615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.757630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.771096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.771111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.784010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.784026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.796296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.796311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.808511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.808525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.821823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.821837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.835169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.835184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.848216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.848230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.861426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.861441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.875023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 
[2024-10-01 15:05:30.875038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.888565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.888580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.902055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.902069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.914928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.914943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.079 [2024-10-01 15:05:30.928042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.079 [2024-10-01 15:05:30.928057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:30.941250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:30.941269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:30.954946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:30.954961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:30.967488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:30.967503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:30.980834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:30.980850] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:30.994097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:30.994112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:31.006728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:31.006743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:31.019547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:31.019562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:31.032307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:31.032322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:31.045508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:31.045523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:31.058236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:31.058251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:31.071504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:31.071518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.341 [2024-10-01 15:05:31.084597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:31.084612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:21.341 [2024-10-01 15:05:31.097221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.341 [2024-10-01 15:05:31.097236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2128 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517 "Unable to add namespace") repeats at roughly 13 ms intervals from 15:05:31.097 through 15:05:33.338 as the test loops on a duplicate NSID; only the timestamps differ. Interleaved throughput samples: 00:08:21.865 19146.00 IOPS, 149.58 MiB/s and 00:08:22.911 19232.50 IOPS, 150.25 MiB/s ...]
00:08:23.694 [2024-10-01 15:05:33.338202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.338216] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.351445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.351459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.364957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.364971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.378212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.378227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.391287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.391301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.404890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.404904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.417595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.417610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.430643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.430658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.444370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.444385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:23.694 [2024-10-01 15:05:33.457843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.457858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.470142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.470157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.483188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.694 [2024-10-01 15:05:33.483203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.694 [2024-10-01 15:05:33.496548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.695 [2024-10-01 15:05:33.496562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.695 [2024-10-01 15:05:33.509450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.695 [2024-10-01 15:05:33.509464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.695 [2024-10-01 15:05:33.522501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.695 [2024-10-01 15:05:33.522515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.695 [2024-10-01 15:05:33.535752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.695 [2024-10-01 15:05:33.535766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.695 [2024-10-01 15:05:33.549342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.695 [2024-10-01 15:05:33.549358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.561946] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.561961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.574113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.574128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.586892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.586906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 19262.33 IOPS, 150.49 MiB/s [2024-10-01 15:05:33.599824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.599838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.612782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.612796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.626168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.626183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.639768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.639782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.652910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.652924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.665682] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.665697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.679219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.679233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.691928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.691943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.705526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.705541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.718131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.718145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.731430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.731444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.744780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.744795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.758182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.758196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.771681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.771695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.784512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.784526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.797626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.797641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.957 [2024-10-01 15:05:33.810488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.957 [2024-10-01 15:05:33.810503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.823809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.823824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.836792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.836806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.849463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.849478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.862771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.862786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.876212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 
[2024-10-01 15:05:33.876226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.888950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.888965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.901700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.901718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.914543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.914557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.928112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.928126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.940920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.940934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.953988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.954007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.967374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.967388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.980466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.980480] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:33.993945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:33.993960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:34.007237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:34.007251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:34.020520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:34.020535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:34.033371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:34.033386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:34.046895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:34.046910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:34.059745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:34.059759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.217 [2024-10-01 15:05:34.072897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.217 [2024-10-01 15:05:34.072912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.086428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.086443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:24.477 [2024-10-01 15:05:34.099199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.099213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.112083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.112097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.125600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.125614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.138431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.138445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.150635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.150653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.164211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.164226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.177039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.177053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.190577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.190592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.203515] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.203529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.216781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.477 [2024-10-01 15:05:34.216795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.477 [2024-10-01 15:05:34.230376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.230390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.478 [2024-10-01 15:05:34.243296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.243310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.478 [2024-10-01 15:05:34.256495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.256510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.478 [2024-10-01 15:05:34.269707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.269721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.478 [2024-10-01 15:05:34.282500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.282515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.478 [2024-10-01 15:05:34.296362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.296377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.478 [2024-10-01 15:05:34.309733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.309748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.478 [2024-10-01 15:05:34.322222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.322236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.478 [2024-10-01 15:05:34.334229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.478 [2024-10-01 15:05:34.334244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.737 [2024-10-01 15:05:34.347526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.737 [2024-10-01 15:05:34.347541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.737 [2024-10-01 15:05:34.361061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.361076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.373432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.373448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.386119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.386134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.399411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.399429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.412159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 
[2024-10-01 15:05:34.412174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.425247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.425262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.438541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.438556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.451927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.451942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.465262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.465278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.478840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.478855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.492011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.492026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.505628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.505643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.518943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.518958] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.531881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.531895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.545338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.545352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.558622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.558637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.572179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.572194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.584972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.584987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.738 [2024-10-01 15:05:34.597660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.738 [2024-10-01 15:05:34.597675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.997 19294.50 IOPS, 150.74 MiB/s [2024-10-01 15:05:34.611047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.997 [2024-10-01 15:05:34.611062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.997 [2024-10-01 15:05:34.624260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.997 [2024-10-01 15:05:34.624275] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.997 [2024-10-01 15:05:34.637795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.997 [2024-10-01 15:05:34.637810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.997 [2024-10-01 15:05:34.650658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.997 [2024-10-01 15:05:34.650673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.997 [2024-10-01 15:05:34.663979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.998 [2024-10-01 15:05:34.663994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.998 [2024-10-01 15:05:34.677245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.998 [2024-10-01 15:05:34.677259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.998 [2024-10-01 15:05:34.690467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.998 [2024-10-01 15:05:34.690482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.998 [2024-10-01 15:05:34.703158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.998 [2024-10-01 15:05:34.703173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.998 [2024-10-01 15:05:34.715798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.998 [2024-10-01 15:05:34.715813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.998 [2024-10-01 15:05:34.729123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.998 [2024-10-01 15:05:34.729137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:24.998 [2024-10-01 15:05:34.741422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.998 [2024-10-01 15:05:34.741437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same *ERROR* pair repeated at ~13 ms intervals from 15:05:34.754 through 15:05:35.576 while the abort workload ran; identical records elided ...]
00:08:25.779 [2024-10-01 15:05:35.589134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.780
[2024-10-01 15:05:35.589147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.780 [2024-10-01 15:05:35.601616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.780 [2024-10-01 15:05:35.601630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.780
19318.20 IOPS, 150.92 MiB/s 00:08:25.780
Latency(us) 00:08:25.780
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.780
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:25.780
Nvme1n1 : 5.01 19321.51 150.95 0.00 0.00 6618.30 2771.63 15291.73 00:08:25.780
=================================================================================================================== 00:08:25.780
Total : 19321.51 150.95 0.00 0.00 6618.30 2771.63 15291.73 00:08:25.780
[... same *ERROR* pair repeated at ~12 ms intervals from 15:05:35.611 through 15:05:35.743 while waiting for the I/O process to exit; identical records elided ...]
00:08:26.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3809065) - No such process 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 3809065 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.040 delay0 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.040 15:05:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:26.040 [2024-10-01 15:05:35.883632] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:34.179 Initializing NVMe Controllers 
00:08:34.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:34.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:34.179 Initialization complete. Launching workers. 00:08:34.179 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 246, failed: 31961 00:08:34.179 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32096, failed to submit 111 00:08:34.179 success 31998, unsuccessful 98, failed 0 00:08:34.179 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:34.179 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:34.179 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:34.179 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:34.179 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.179 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:34.179 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.179 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.180 rmmod nvme_tcp 00:08:34.180 rmmod nvme_fabrics 00:08:34.180 rmmod nvme_keyring 00:08:34.180 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.180 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:34.180 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:34.180 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 3806700 ']' 00:08:34.180 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 3806700 00:08:34.180 15:05:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3806700 ']' 00:08:34.180 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3806700 00:08:34.180 15:05:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3806700 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3806700' 00:08:34.180 killing process with pid 3806700 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3806700 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3806700 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.180 15:05:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.564 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:35.564 00:08:35.564 real 0m34.481s 00:08:35.564 user 0m45.630s 00:08:35.564 sys 0m11.502s 00:08:35.564 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.564 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:35.564 ************************************ 00:08:35.564 END TEST nvmf_zcopy 00:08:35.564 ************************************ 00:08:35.564 15:05:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:35.564 15:05:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:35.564 15:05:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.564 15:05:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.564 ************************************ 00:08:35.564 START TEST nvmf_nmic 00:08:35.564 ************************************ 00:08:35.564 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:35.826 * Looking for test storage... 
00:08:35.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.826 15:05:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:35.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.826 --rc genhtml_branch_coverage=1 00:08:35.826 --rc genhtml_function_coverage=1 00:08:35.826 --rc genhtml_legend=1 00:08:35.826 --rc geninfo_all_blocks=1 00:08:35.826 --rc geninfo_unexecuted_blocks=1 
00:08:35.826 00:08:35.826 ' 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:35.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.826 --rc genhtml_branch_coverage=1 00:08:35.826 --rc genhtml_function_coverage=1 00:08:35.826 --rc genhtml_legend=1 00:08:35.826 --rc geninfo_all_blocks=1 00:08:35.826 --rc geninfo_unexecuted_blocks=1 00:08:35.826 00:08:35.826 ' 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:35.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.826 --rc genhtml_branch_coverage=1 00:08:35.826 --rc genhtml_function_coverage=1 00:08:35.826 --rc genhtml_legend=1 00:08:35.826 --rc geninfo_all_blocks=1 00:08:35.826 --rc geninfo_unexecuted_blocks=1 00:08:35.826 00:08:35.826 ' 00:08:35.826 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:35.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.826 --rc genhtml_branch_coverage=1 00:08:35.826 --rc genhtml_function_coverage=1 00:08:35.826 --rc genhtml_legend=1 00:08:35.826 --rc geninfo_all_blocks=1 00:08:35.826 --rc geninfo_unexecuted_blocks=1 00:08:35.826 00:08:35.826 ' 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.827 15:05:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:35.827 
15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.827 15:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.973 15:05:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 
0x159b)' 00:08:43.973 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:43.973 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:43.973 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ up == up ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:43.973 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.973 15:05:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # is_hw=yes 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.973 
15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:08:43.973 00:08:43.973 --- 10.0.0.2 ping statistics --- 00:08:43.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.973 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:08:43.973 00:08:43.973 --- 10.0.0.1 ping statistics --- 00:08:43.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.973 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # return 0 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:43.973 15:05:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:43.973 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:43.973 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:43.973 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=3815749 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 3815749 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3815749 ']' 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.974 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:43.974 [2024-10-01 15:05:53.088189] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:08:43.974 [2024-10-01 15:05:53.088258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.974 [2024-10-01 15:05:53.159426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.974 [2024-10-01 15:05:53.236074] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.974 [2024-10-01 15:05:53.236113] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:43.974 [2024-10-01 15:05:53.236121] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.974 [2024-10-01 15:05:53.236128] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.974 [2024-10-01 15:05:53.236134] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.974 [2024-10-01 15:05:53.236281] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.974 [2024-10-01 15:05:53.236397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.974 [2024-10-01 15:05:53.236530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.974 [2024-10-01 15:05:53.236531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 [2024-10-01 15:05:53.947066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.240 
15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 Malloc0 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 [2024-10-01 15:05:54.006309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:44.240 test case1: single bdev can't be used in multiple subsystems 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 [2024-10-01 15:05:54.042238] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:44.240 [2024-10-01 
15:05:54.042257] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:44.240 [2024-10-01 15:05:54.042265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.240 request: 00:08:44.240 { 00:08:44.240 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:44.240 "namespace": { 00:08:44.240 "bdev_name": "Malloc0", 00:08:44.240 "no_auto_visible": false 00:08:44.240 }, 00:08:44.240 "method": "nvmf_subsystem_add_ns", 00:08:44.240 "req_id": 1 00:08:44.240 } 00:08:44.240 Got JSON-RPC error response 00:08:44.240 response: 00:08:44.240 { 00:08:44.240 "code": -32602, 00:08:44.240 "message": "Invalid parameters" 00:08:44.240 } 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:44.240 Adding namespace failed - expected result. 
00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:44.240 test case2: host connect to nvmf target in multiple paths 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.240 [2024-10-01 15:05:54.054375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.240 15:05:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.755 15:05:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:47.671 15:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:47.671 15:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:47.671 15:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.671 15:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:47.671 15:05:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:08:49.583 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:49.583 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:49.583 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.583 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:49.583 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.583 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:08:49.583 15:05:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:49.583 [global] 00:08:49.583 thread=1 00:08:49.583 invalidate=1 00:08:49.583 rw=write 00:08:49.583 time_based=1 00:08:49.583 runtime=1 00:08:49.583 ioengine=libaio 00:08:49.583 direct=1 00:08:49.583 bs=4096 00:08:49.583 iodepth=1 00:08:49.583 norandommap=0 00:08:49.583 numjobs=1 00:08:49.583 00:08:49.583 verify_dump=1 00:08:49.583 verify_backlog=512 00:08:49.583 verify_state_save=0 00:08:49.583 do_verify=1 00:08:49.583 verify=crc32c-intel 00:08:49.583 [job0] 00:08:49.583 filename=/dev/nvme0n1 00:08:49.583 Could not set queue depth (nvme0n1) 00:08:49.583 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.583 fio-3.35 00:08:49.583 Starting 1 thread 00:08:50.967 00:08:50.967 job0: (groupid=0, jobs=1): err= 0: pid=3817310: Tue Oct 1 15:06:00 2024 00:08:50.967 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:08:50.967 slat (nsec): min=7092, max=60010, avg=25353.21, stdev=3212.42 00:08:50.967 clat (usec): min=530, max=2287, avg=953.31, stdev=127.66 00:08:50.967 lat (usec): min=555, max=2312, 
avg=978.66, stdev=127.74 00:08:50.967 clat percentiles (usec): 00:08:50.967 | 1.00th=[ 603], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 857], 00:08:50.967 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 996], 00:08:50.967 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090], 00:08:50.967 | 99.00th=[ 1172], 99.50th=[ 1287], 99.90th=[ 2278], 99.95th=[ 2278], 00:08:50.967 | 99.99th=[ 2278] 00:08:50.967 write: IOPS=846, BW=3385KiB/s (3466kB/s)(3388KiB/1001msec); 0 zone resets 00:08:50.967 slat (nsec): min=9674, max=65994, avg=29427.21, stdev=8873.48 00:08:50.967 clat (usec): min=149, max=781, avg=548.19, stdev=110.10 00:08:50.967 lat (usec): min=160, max=828, avg=577.61, stdev=112.83 00:08:50.967 clat percentiles (usec): 00:08:50.967 | 1.00th=[ 243], 5.00th=[ 330], 10.00th=[ 400], 20.00th=[ 453], 00:08:50.967 | 30.00th=[ 494], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 586], 00:08:50.967 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 685], 95.00th=[ 701], 00:08:50.967 | 99.00th=[ 758], 99.50th=[ 766], 99.90th=[ 783], 99.95th=[ 783], 00:08:50.967 | 99.99th=[ 783] 00:08:50.967 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:50.967 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:50.967 lat (usec) : 250=0.66%, 500=18.69%, 750=44.30%, 1000=21.34% 00:08:50.967 lat (msec) : 2=14.94%, 4=0.07% 00:08:50.967 cpu : usr=2.00%, sys=3.90%, ctx=1359, majf=0, minf=1 00:08:50.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.967 issued rwts: total=512,847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.967 00:08:50.967 Run status group 0 (all jobs): 00:08:50.967 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s 
(2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:08:50.967 WRITE: bw=3385KiB/s (3466kB/s), 3385KiB/s-3385KiB/s (3466kB/s-3466kB/s), io=3388KiB (3469kB), run=1001-1001msec 00:08:50.967 00:08:50.967 Disk stats (read/write): 00:08:50.967 nvme0n1: ios=562/653, merge=0/0, ticks=620/354, in_queue=974, util=97.60% 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:50.967 15:06:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.967 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.967 rmmod nvme_tcp 00:08:50.967 rmmod nvme_fabrics 00:08:50.967 rmmod nvme_keyring 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 3815749 ']' 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 3815749 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3815749 ']' 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3815749 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3815749 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3815749' 00:08:51.227 killing process with pid 3815749 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3815749 00:08:51.227 15:06:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 
3815749 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.227 15:06:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.769 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.769 00:08:53.769 real 0m17.797s 00:08:53.769 user 0m47.139s 00:08:53.769 sys 0m6.606s 00:08:53.769 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.769 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:53.769 ************************************ 00:08:53.769 END TEST nvmf_nmic 00:08:53.769 ************************************ 00:08:53.769 15:06:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.770 ************************************ 00:08:53.770 START TEST nvmf_fio_target 00:08:53.770 ************************************ 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:53.770 * Looking for test storage... 00:08:53.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:53.770 15:06:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.770 --rc genhtml_branch_coverage=1 00:08:53.770 --rc genhtml_function_coverage=1 00:08:53.770 --rc genhtml_legend=1 00:08:53.770 --rc geninfo_all_blocks=1 00:08:53.770 --rc geninfo_unexecuted_blocks=1 00:08:53.770 00:08:53.770 ' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.770 --rc genhtml_branch_coverage=1 00:08:53.770 --rc genhtml_function_coverage=1 00:08:53.770 --rc genhtml_legend=1 00:08:53.770 --rc geninfo_all_blocks=1 00:08:53.770 --rc geninfo_unexecuted_blocks=1 00:08:53.770 00:08:53.770 ' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.770 --rc genhtml_branch_coverage=1 00:08:53.770 --rc genhtml_function_coverage=1 00:08:53.770 --rc genhtml_legend=1 00:08:53.770 --rc geninfo_all_blocks=1 00:08:53.770 --rc geninfo_unexecuted_blocks=1 00:08:53.770 00:08:53.770 ' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:08:53.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.770 --rc genhtml_branch_coverage=1 00:08:53.770 --rc genhtml_function_coverage=1 00:08:53.770 --rc genhtml_legend=1 00:08:53.770 --rc geninfo_all_blocks=1 00:08:53.770 --rc geninfo_unexecuted_blocks=1 00:08:53.770 00:08:53.770 ' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.770 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.771 15:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.911 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.911 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.911 15:06:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.911 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.911 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.911 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:01.912 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:01.912 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:01.912 15:06:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:01.912 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:01.912 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.912 15:06:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # is_hw=yes 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.912 15:06:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:09:01.912 00:09:01.912 --- 10.0.0.2 ping statistics --- 00:09:01.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.912 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:09:01.912 00:09:01.912 --- 10.0.0.1 ping statistics --- 00:09:01.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.912 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # return 0 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:01.912 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 
00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=3822367 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 3822367 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3822367 ']' 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.913 15:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.913 [2024-10-01 15:06:11.043389] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:09:01.913 [2024-10-01 15:06:11.043441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.913 [2024-10-01 15:06:11.109347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.913 [2024-10-01 15:06:11.174993] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.913 [2024-10-01 15:06:11.175036] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.913 [2024-10-01 15:06:11.175045] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.913 [2024-10-01 15:06:11.175052] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.913 [2024-10-01 15:06:11.175057] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:01.913 [2024-10-01 15:06:11.175117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.913 [2024-10-01 15:06:11.175262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.913 [2024-10-01 15:06:11.175418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.913 [2024-10-01 15:06:11.175418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.173 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.173 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:02.173 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:02.173 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.173 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.173 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.173 15:06:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.433 [2024-10-01 15:06:12.037498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.433 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.433 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:02.433 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.694 15:06:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:02.694 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.954 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:02.954 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.214 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:03.214 15:06:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:03.214 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.475 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:03.475 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.735 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:03.735 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.995 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:03.995 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:03.995 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.256 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:04.256 15:06:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.516 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:04.516 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:04.516 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.777 [2024-10-01 15:06:14.484069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.777 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:05.037 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:05.037 15:06:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:06.947 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:06.947 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:06.947 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.947 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:06.947 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:06.947 15:06:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:08.858 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:08.858 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:08.858 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.858 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:08.858 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.858 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:08.858 15:06:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:08.858 [global] 00:09:08.858 thread=1 00:09:08.858 invalidate=1 00:09:08.858 rw=write 00:09:08.858 time_based=1 00:09:08.858 runtime=1 00:09:08.858 ioengine=libaio 00:09:08.858 direct=1 00:09:08.858 bs=4096 00:09:08.858 iodepth=1 00:09:08.858 norandommap=0 00:09:08.858 numjobs=1 00:09:08.858 00:09:08.858 
verify_dump=1 00:09:08.858 verify_backlog=512 00:09:08.858 verify_state_save=0 00:09:08.858 do_verify=1 00:09:08.858 verify=crc32c-intel 00:09:08.858 [job0] 00:09:08.858 filename=/dev/nvme0n1 00:09:08.858 [job1] 00:09:08.858 filename=/dev/nvme0n2 00:09:08.858 [job2] 00:09:08.858 filename=/dev/nvme0n3 00:09:08.858 [job3] 00:09:08.858 filename=/dev/nvme0n4 00:09:08.858 Could not set queue depth (nvme0n1) 00:09:08.858 Could not set queue depth (nvme0n2) 00:09:08.858 Could not set queue depth (nvme0n3) 00:09:08.858 Could not set queue depth (nvme0n4) 00:09:09.120 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.120 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.120 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.120 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.120 fio-3.35 00:09:09.120 Starting 4 threads 00:09:10.508 00:09:10.508 job0: (groupid=0, jobs=1): err= 0: pid=3824146: Tue Oct 1 15:06:20 2024 00:09:10.508 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:10.508 slat (nsec): min=25608, max=58948, avg=26627.09, stdev=3205.48 00:09:10.508 clat (usec): min=676, max=1360, avg=1103.50, stdev=106.37 00:09:10.508 lat (usec): min=703, max=1386, avg=1130.12, stdev=106.12 00:09:10.508 clat percentiles (usec): 00:09:10.508 | 1.00th=[ 791], 5.00th=[ 914], 10.00th=[ 971], 20.00th=[ 1020], 00:09:10.508 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1139], 00:09:10.508 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1254], 00:09:10.508 | 99.00th=[ 1319], 99.50th=[ 1336], 99.90th=[ 1369], 99.95th=[ 1369], 00:09:10.508 | 99.99th=[ 1369] 00:09:10.508 write: IOPS=641, BW=2565KiB/s (2627kB/s)(2568KiB/1001msec); 0 zone resets 00:09:10.508 slat (nsec): min=9199, max=52619, 
avg=30740.72, stdev=8508.42 00:09:10.508 clat (usec): min=236, max=1217, avg=611.84, stdev=122.02 00:09:10.508 lat (usec): min=267, max=1249, avg=642.58, stdev=124.83 00:09:10.508 clat percentiles (usec): 00:09:10.508 | 1.00th=[ 338], 5.00th=[ 388], 10.00th=[ 465], 20.00th=[ 510], 00:09:10.508 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:09:10.508 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 799], 00:09:10.508 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:10.508 | 99.99th=[ 1221] 00:09:10.508 bw ( KiB/s): min= 4096, max= 4096, per=35.14%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.508 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.508 lat (usec) : 250=0.09%, 500=9.97%, 750=38.73%, 1000=12.74% 00:09:10.508 lat (msec) : 2=38.47% 00:09:10.508 cpu : usr=2.50%, sys=4.50%, ctx=1154, majf=0, minf=1 00:09:10.508 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.508 issued rwts: total=512,642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.508 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.508 job1: (groupid=0, jobs=1): err= 0: pid=3824147: Tue Oct 1 15:06:20 2024 00:09:10.508 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:10.508 slat (nsec): min=9021, max=60632, avg=26872.62, stdev=2973.60 00:09:10.508 clat (usec): min=643, max=1217, avg=986.73, stdev=79.22 00:09:10.508 lat (usec): min=655, max=1244, avg=1013.60, stdev=79.64 00:09:10.508 clat percentiles (usec): 00:09:10.508 | 1.00th=[ 775], 5.00th=[ 824], 10.00th=[ 873], 20.00th=[ 938], 00:09:10.508 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:09:10.508 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:09:10.508 | 99.00th=[ 1139], 99.50th=[ 1205], 
99.90th=[ 1221], 99.95th=[ 1221], 00:09:10.508 | 99.99th=[ 1221] 00:09:10.508 write: IOPS=751, BW=3005KiB/s (3077kB/s)(3008KiB/1001msec); 0 zone resets 00:09:10.508 slat (nsec): min=9404, max=59239, avg=29622.82, stdev=10825.84 00:09:10.508 clat (usec): min=168, max=1215, avg=597.48, stdev=122.38 00:09:10.508 lat (usec): min=181, max=1250, avg=627.11, stdev=127.93 00:09:10.508 clat percentiles (usec): 00:09:10.508 | 1.00th=[ 293], 5.00th=[ 383], 10.00th=[ 437], 20.00th=[ 490], 00:09:10.509 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:09:10.509 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:09:10.509 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:10.509 | 99.99th=[ 1221] 00:09:10.509 bw ( KiB/s): min= 4096, max= 4096, per=35.14%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.509 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.509 lat (usec) : 250=0.24%, 500=13.05%, 750=41.06%, 1000=25.47% 00:09:10.509 lat (msec) : 2=20.17% 00:09:10.509 cpu : usr=3.30%, sys=4.00%, ctx=1267, majf=0, minf=1 00:09:10.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.509 issued rwts: total=512,752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.509 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.509 job2: (groupid=0, jobs=1): err= 0: pid=3824148: Tue Oct 1 15:06:20 2024 00:09:10.509 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:10.509 slat (nsec): min=6565, max=59626, avg=24237.45, stdev=6976.57 00:09:10.509 clat (usec): min=468, max=1340, avg=976.37, stdev=103.12 00:09:10.509 lat (usec): min=475, max=1366, avg=1000.61, stdev=105.66 00:09:10.509 clat percentiles (usec): 00:09:10.509 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 898], 
00:09:10.509 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020], 00:09:10.509 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:09:10.509 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1336], 99.95th=[ 1336], 00:09:10.509 | 99.99th=[ 1336] 00:09:10.509 write: IOPS=819, BW=3277KiB/s (3355kB/s)(3280KiB/1001msec); 0 zone resets 00:09:10.509 slat (nsec): min=9337, max=52443, avg=30060.82, stdev=8945.36 00:09:10.509 clat (usec): min=169, max=914, avg=552.97, stdev=130.41 00:09:10.509 lat (usec): min=179, max=947, avg=583.03, stdev=133.47 00:09:10.509 clat percentiles (usec): 00:09:10.509 | 1.00th=[ 255], 5.00th=[ 326], 10.00th=[ 388], 20.00th=[ 441], 00:09:10.509 | 30.00th=[ 490], 40.00th=[ 515], 50.00th=[ 562], 60.00th=[ 594], 00:09:10.509 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 758], 00:09:10.509 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 914], 99.95th=[ 914], 00:09:10.509 | 99.99th=[ 914] 00:09:10.509 bw ( KiB/s): min= 4096, max= 4096, per=35.14%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.509 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.509 lat (usec) : 250=0.60%, 500=20.87%, 750=37.46%, 1000=22.75% 00:09:10.509 lat (msec) : 2=18.32% 00:09:10.509 cpu : usr=2.10%, sys=4.80%, ctx=1332, majf=0, minf=1 00:09:10.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.509 issued rwts: total=512,820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.509 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.509 job3: (groupid=0, jobs=1): err= 0: pid=3824150: Tue Oct 1 15:06:20 2024 00:09:10.509 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:10.509 slat (nsec): min=7287, max=43775, avg=25745.92, stdev=2945.56 00:09:10.509 clat (usec): min=611, max=1594, avg=1006.68, 
stdev=109.93 00:09:10.509 lat (usec): min=644, max=1619, avg=1032.42, stdev=109.72 00:09:10.509 clat percentiles (usec): 00:09:10.509 | 1.00th=[ 758], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 930], 00:09:10.509 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:09:10.509 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1172], 00:09:10.509 | 99.00th=[ 1221], 99.50th=[ 1270], 99.90th=[ 1598], 99.95th=[ 1598], 00:09:10.509 | 99.99th=[ 1598] 00:09:10.509 write: IOPS=702, BW=2809KiB/s (2877kB/s)(2812KiB/1001msec); 0 zone resets 00:09:10.509 slat (nsec): min=9404, max=52219, avg=28498.04, stdev=9626.24 00:09:10.509 clat (usec): min=257, max=1361, avg=629.20, stdev=129.81 00:09:10.509 lat (usec): min=267, max=1393, avg=657.70, stdev=134.28 00:09:10.509 clat percentiles (usec): 00:09:10.509 | 1.00th=[ 318], 5.00th=[ 392], 10.00th=[ 469], 20.00th=[ 529], 00:09:10.509 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 676], 00:09:10.509 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:09:10.509 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 1369], 99.95th=[ 1369], 00:09:10.509 | 99.99th=[ 1369] 00:09:10.509 bw ( KiB/s): min= 4096, max= 4096, per=35.14%, avg=4096.00, stdev= 0.00, samples=1 00:09:10.509 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:10.509 lat (usec) : 500=8.64%, 750=41.40%, 1000=26.50% 00:09:10.509 lat (msec) : 2=23.46% 00:09:10.509 cpu : usr=1.90%, sys=3.70%, ctx=1215, majf=0, minf=1 00:09:10.509 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.509 issued rwts: total=512,703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.509 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.509 00:09:10.509 Run status group 0 (all jobs): 00:09:10.509 READ: bw=8184KiB/s 
(8380kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:10.509 WRITE: bw=11.4MiB/s (11.9MB/s), 2565KiB/s-3277KiB/s (2627kB/s-3355kB/s), io=11.4MiB (11.9MB), run=1001-1001msec 00:09:10.509 00:09:10.509 Disk stats (read/write): 00:09:10.509 nvme0n1: ios=490/512, merge=0/0, ticks=761/258, in_queue=1019, util=94.99% 00:09:10.509 nvme0n2: ios=518/512, merge=0/0, ticks=1432/247, in_queue=1679, util=96.94% 00:09:10.509 nvme0n3: ios=512/552, merge=0/0, ticks=470/250, in_queue=720, util=88.36% 00:09:10.509 nvme0n4: ios=473/512, merge=0/0, ticks=457/278, in_queue=735, util=89.40% 00:09:10.509 15:06:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:10.509 [global] 00:09:10.509 thread=1 00:09:10.509 invalidate=1 00:09:10.509 rw=randwrite 00:09:10.509 time_based=1 00:09:10.509 runtime=1 00:09:10.509 ioengine=libaio 00:09:10.509 direct=1 00:09:10.509 bs=4096 00:09:10.509 iodepth=1 00:09:10.509 norandommap=0 00:09:10.509 numjobs=1 00:09:10.509 00:09:10.509 verify_dump=1 00:09:10.509 verify_backlog=512 00:09:10.509 verify_state_save=0 00:09:10.509 do_verify=1 00:09:10.509 verify=crc32c-intel 00:09:10.509 [job0] 00:09:10.509 filename=/dev/nvme0n1 00:09:10.509 [job1] 00:09:10.509 filename=/dev/nvme0n2 00:09:10.509 [job2] 00:09:10.509 filename=/dev/nvme0n3 00:09:10.509 [job3] 00:09:10.509 filename=/dev/nvme0n4 00:09:10.509 Could not set queue depth (nvme0n1) 00:09:10.509 Could not set queue depth (nvme0n2) 00:09:10.509 Could not set queue depth (nvme0n3) 00:09:10.509 Could not set queue depth (nvme0n4) 00:09:10.769 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.770 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.770 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.770 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.770 fio-3.35 00:09:10.770 Starting 4 threads 00:09:12.159 00:09:12.159 job0: (groupid=0, jobs=1): err= 0: pid=3824676: Tue Oct 1 15:06:21 2024 00:09:12.159 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1028msec) 00:09:12.159 slat (nsec): min=25013, max=25582, avg=25209.35, stdev=173.91 00:09:12.159 clat (usec): min=40825, max=42123, avg=41673.59, stdev=459.95 00:09:12.159 lat (usec): min=40850, max=42148, avg=41698.80, stdev=459.92 00:09:12.159 clat percentiles (usec): 00:09:12.159 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:12.159 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:12.159 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:12.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:12.159 | 99.99th=[42206] 00:09:12.159 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:09:12.159 slat (nsec): min=9200, max=51638, avg=28391.34, stdev=8662.18 00:09:12.159 clat (usec): min=132, max=1064, avg=587.21, stdev=139.21 00:09:12.159 lat (usec): min=142, max=1096, avg=615.60, stdev=141.65 00:09:12.159 clat percentiles (usec): 00:09:12.159 | 1.00th=[ 223], 5.00th=[ 343], 10.00th=[ 404], 20.00th=[ 474], 00:09:12.159 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:09:12.159 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 807], 00:09:12.159 | 99.00th=[ 881], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057], 00:09:12.159 | 99.99th=[ 1057] 00:09:12.159 bw ( KiB/s): min= 4096, max= 4096, per=42.22%, avg=4096.00, stdev= 0.00, samples=1 00:09:12.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:12.159 lat (usec) : 250=1.70%, 500=23.06%, 750=62.19%, 1000=9.45% 00:09:12.159 lat (msec) : 2=0.38%, 50=3.21% 
00:09:12.159 cpu : usr=0.78%, sys=1.36%, ctx=529, majf=0, minf=1 00:09:12.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.159 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.159 job1: (groupid=0, jobs=1): err= 0: pid=3824677: Tue Oct 1 15:06:21 2024 00:09:12.159 read: IOPS=19, BW=79.6KiB/s (81.5kB/s)(80.0KiB/1005msec) 00:09:12.159 slat (nsec): min=26047, max=27060, avg=26524.45, stdev=294.88 00:09:12.159 clat (usec): min=649, max=41015, avg=38937.85, stdev=9012.15 00:09:12.159 lat (usec): min=676, max=41042, avg=38964.37, stdev=9012.06 00:09:12.159 clat percentiles (usec): 00:09:12.159 | 1.00th=[ 652], 5.00th=[ 652], 10.00th=[40633], 20.00th=[41157], 00:09:12.159 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:12.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:12.159 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:12.159 | 99.99th=[41157] 00:09:12.159 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:12.159 slat (nsec): min=8766, max=51444, avg=29176.59, stdev=9140.06 00:09:12.159 clat (usec): min=135, max=789, avg=403.60, stdev=120.45 00:09:12.159 lat (usec): min=144, max=821, avg=432.78, stdev=122.15 00:09:12.159 clat percentiles (usec): 00:09:12.159 | 1.00th=[ 190], 5.00th=[ 241], 10.00th=[ 277], 20.00th=[ 297], 00:09:12.159 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 388], 60.00th=[ 433], 00:09:12.159 | 70.00th=[ 457], 80.00th=[ 515], 90.00th=[ 578], 95.00th=[ 627], 00:09:12.159 | 99.00th=[ 693], 99.50th=[ 693], 99.90th=[ 791], 99.95th=[ 791], 00:09:12.159 | 99.99th=[ 791] 00:09:12.159 bw ( KiB/s): min= 4096, max= 4096, per=42.22%, avg=4096.00, 
stdev= 0.00, samples=1 00:09:12.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:12.159 lat (usec) : 250=5.26%, 500=70.49%, 750=20.49%, 1000=0.19% 00:09:12.159 lat (msec) : 50=3.57% 00:09:12.159 cpu : usr=1.59%, sys=1.39%, ctx=532, majf=0, minf=1 00:09:12.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.159 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.159 job2: (groupid=0, jobs=1): err= 0: pid=3824678: Tue Oct 1 15:06:21 2024 00:09:12.159 read: IOPS=17, BW=69.6KiB/s (71.2kB/s)(72.0KiB/1035msec) 00:09:12.159 slat (nsec): min=10961, max=31183, avg=24362.89, stdev=4011.10 00:09:12.159 clat (usec): min=41843, max=42070, avg=41968.35, stdev=68.80 00:09:12.159 lat (usec): min=41868, max=42101, avg=41992.71, stdev=69.30 00:09:12.159 clat percentiles (usec): 00:09:12.159 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:09:12.159 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:12.159 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:12.159 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:12.159 | 99.99th=[42206] 00:09:12.159 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:09:12.159 slat (nsec): min=9676, max=60824, avg=27249.07, stdev=10135.97 00:09:12.159 clat (usec): min=133, max=879, avg=510.68, stdev=126.48 00:09:12.159 lat (usec): min=144, max=929, avg=537.93, stdev=129.82 00:09:12.159 clat percentiles (usec): 00:09:12.159 | 1.00th=[ 161], 5.00th=[ 285], 10.00th=[ 355], 20.00th=[ 408], 00:09:12.159 | 30.00th=[ 445], 40.00th=[ 486], 50.00th=[ 519], 60.00th=[ 537], 00:09:12.159 | 70.00th=[ 578], 80.00th=[ 619], 
90.00th=[ 676], 95.00th=[ 717], 00:09:12.159 | 99.00th=[ 758], 99.50th=[ 775], 99.90th=[ 881], 99.95th=[ 881], 00:09:12.159 | 99.99th=[ 881] 00:09:12.159 bw ( KiB/s): min= 4096, max= 4096, per=42.22%, avg=4096.00, stdev= 0.00, samples=1 00:09:12.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:12.159 lat (usec) : 250=2.08%, 500=40.94%, 750=52.08%, 1000=1.51% 00:09:12.159 lat (msec) : 50=3.40% 00:09:12.159 cpu : usr=0.97%, sys=1.06%, ctx=530, majf=0, minf=1 00:09:12.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.159 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.159 job3: (groupid=0, jobs=1): err= 0: pid=3824679: Tue Oct 1 15:06:21 2024 00:09:12.159 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:12.159 slat (nsec): min=7168, max=57804, avg=25056.51, stdev=5474.71 00:09:12.159 clat (usec): min=445, max=1057, avg=808.09, stdev=122.41 00:09:12.159 lat (usec): min=471, max=1068, avg=833.14, stdev=122.87 00:09:12.159 clat percentiles (usec): 00:09:12.159 | 1.00th=[ 510], 5.00th=[ 570], 10.00th=[ 635], 20.00th=[ 701], 00:09:12.159 | 30.00th=[ 750], 40.00th=[ 783], 50.00th=[ 824], 60.00th=[ 873], 00:09:12.159 | 70.00th=[ 898], 80.00th=[ 922], 90.00th=[ 947], 95.00th=[ 963], 00:09:12.159 | 99.00th=[ 996], 99.50th=[ 1012], 99.90th=[ 1057], 99.95th=[ 1057], 00:09:12.159 | 99.99th=[ 1057] 00:09:12.159 write: IOPS=973, BW=3892KiB/s (3986kB/s)(3896KiB/1001msec); 0 zone resets 00:09:12.159 slat (nsec): min=9635, max=65749, avg=30177.52, stdev=7962.26 00:09:12.159 clat (usec): min=134, max=956, avg=546.51, stdev=117.62 00:09:12.159 lat (usec): min=145, max=988, avg=576.69, stdev=119.55 00:09:12.159 clat percentiles (usec): 00:09:12.159 | 
1.00th=[ 253], 5.00th=[ 359], 10.00th=[ 400], 20.00th=[ 461], 00:09:12.159 | 30.00th=[ 494], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 578], 00:09:12.159 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 693], 95.00th=[ 734], 00:09:12.159 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 955], 99.95th=[ 955], 00:09:12.159 | 99.99th=[ 955] 00:09:12.159 bw ( KiB/s): min= 4096, max= 4096, per=42.22%, avg=4096.00, stdev= 0.00, samples=1 00:09:12.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:12.159 lat (usec) : 250=0.54%, 500=21.94%, 750=51.08%, 1000=26.11% 00:09:12.159 lat (msec) : 2=0.34% 00:09:12.159 cpu : usr=2.30%, sys=4.30%, ctx=1486, majf=0, minf=1 00:09:12.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:12.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.160 issued rwts: total=512,974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:12.160 00:09:12.160 Run status group 0 (all jobs): 00:09:12.160 READ: bw=2191KiB/s (2244kB/s), 66.1KiB/s-2046KiB/s (67.7kB/s-2095kB/s), io=2268KiB (2322kB), run=1001-1035msec 00:09:12.160 WRITE: bw=9700KiB/s (9933kB/s), 1979KiB/s-3892KiB/s (2026kB/s-3986kB/s), io=9.80MiB (10.3MB), run=1001-1035msec 00:09:12.160 00:09:12.160 Disk stats (read/write): 00:09:12.160 nvme0n1: ios=62/512, merge=0/0, ticks=555/272, in_queue=827, util=87.27% 00:09:12.160 nvme0n2: ios=54/512, merge=0/0, ticks=611/155, in_queue=766, util=87.55% 00:09:12.160 nvme0n3: ios=13/512, merge=0/0, ticks=546/238, in_queue=784, util=88.37% 00:09:12.160 nvme0n4: ios=512/673, merge=0/0, ticks=399/338, in_queue=737, util=89.51% 00:09:12.160 15:06:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:12.160 
[global] 00:09:12.160 thread=1 00:09:12.160 invalidate=1 00:09:12.160 rw=write 00:09:12.160 time_based=1 00:09:12.160 runtime=1 00:09:12.160 ioengine=libaio 00:09:12.160 direct=1 00:09:12.160 bs=4096 00:09:12.160 iodepth=128 00:09:12.160 norandommap=0 00:09:12.160 numjobs=1 00:09:12.160 00:09:12.160 verify_dump=1 00:09:12.160 verify_backlog=512 00:09:12.160 verify_state_save=0 00:09:12.160 do_verify=1 00:09:12.160 verify=crc32c-intel 00:09:12.160 [job0] 00:09:12.160 filename=/dev/nvme0n1 00:09:12.160 [job1] 00:09:12.160 filename=/dev/nvme0n2 00:09:12.160 [job2] 00:09:12.160 filename=/dev/nvme0n3 00:09:12.160 [job3] 00:09:12.160 filename=/dev/nvme0n4 00:09:12.160 Could not set queue depth (nvme0n1) 00:09:12.160 Could not set queue depth (nvme0n2) 00:09:12.160 Could not set queue depth (nvme0n3) 00:09:12.160 Could not set queue depth (nvme0n4) 00:09:12.422 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.422 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.422 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.422 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.422 fio-3.35 00:09:12.422 Starting 4 threads 00:09:13.807 00:09:13.807 job0: (groupid=0, jobs=1): err= 0: pid=3825201: Tue Oct 1 15:06:23 2024 00:09:13.807 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:09:13.807 slat (nsec): min=946, max=13096k, avg=109778.71, stdev=754019.16 00:09:13.807 clat (usec): min=4168, max=55955, avg=12905.33, stdev=5791.69 00:09:13.808 lat (usec): min=4175, max=55957, avg=13015.11, stdev=5860.73 00:09:13.808 clat percentiles (usec): 00:09:13.808 | 1.00th=[ 4555], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 7963], 00:09:13.808 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[11863], 60.00th=[13566], 00:09:13.808 | 
70.00th=[15270], 80.00th=[16909], 90.00th=[19792], 95.00th=[21103], 00:09:13.808 | 99.00th=[33817], 99.50th=[42206], 99.90th=[55837], 99.95th=[55837], 00:09:13.808 | 99.99th=[55837] 00:09:13.808 write: IOPS=4255, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1009msec); 0 zone resets 00:09:13.808 slat (nsec): min=1632, max=10764k, avg=122548.98, stdev=629180.22 00:09:13.808 clat (usec): min=1111, max=65526, avg=17503.88, stdev=11862.63 00:09:13.808 lat (usec): min=1133, max=65535, avg=17626.43, stdev=11920.37 00:09:13.808 clat percentiles (usec): 00:09:13.808 | 1.00th=[ 4080], 5.00th=[ 5407], 10.00th=[ 7439], 20.00th=[ 9110], 00:09:13.808 | 30.00th=[11600], 40.00th=[12125], 50.00th=[14091], 60.00th=[17171], 00:09:13.808 | 70.00th=[19006], 80.00th=[21103], 90.00th=[32113], 95.00th=[47449], 00:09:13.808 | 99.00th=[61080], 99.50th=[62653], 99.90th=[65274], 99.95th=[65274], 00:09:13.808 | 99.99th=[65274] 00:09:13.808 bw ( KiB/s): min=12856, max=20480, per=22.29%, avg=16668.00, stdev=5390.98, samples=2 00:09:13.808 iops : min= 3214, max= 5120, avg=4167.00, stdev=1347.75, samples=2 00:09:13.808 lat (msec) : 2=0.02%, 4=0.46%, 10=30.60%, 20=51.74%, 50=15.03% 00:09:13.808 lat (msec) : 100=2.15% 00:09:13.808 cpu : usr=3.57%, sys=4.27%, ctx=426, majf=0, minf=1 00:09:13.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:13.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.808 issued rwts: total=4096,4294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.808 job1: (groupid=0, jobs=1): err= 0: pid=3825202: Tue Oct 1 15:06:23 2024 00:09:13.808 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:09:13.808 slat (usec): min=3, max=34131, avg=255.44, stdev=1942.55 00:09:13.808 clat (msec): min=9, max=108, avg=35.48, stdev=26.86 00:09:13.808 lat (msec): min=12, max=108, 
avg=35.73, stdev=26.98 00:09:13.808 clat percentiles (msec): 00:09:13.808 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:09:13.808 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 23], 60.00th=[ 28], 00:09:13.808 | 70.00th=[ 38], 80.00th=[ 59], 90.00th=[ 88], 95.00th=[ 101], 00:09:13.808 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 109], 99.95th=[ 109], 00:09:13.808 | 99.99th=[ 109] 00:09:13.808 write: IOPS=2293, BW=9174KiB/s (9394kB/s)(9220KiB/1005msec); 0 zone resets 00:09:13.808 slat (usec): min=6, max=22622, avg=200.26, stdev=1338.15 00:09:13.808 clat (usec): min=4554, max=63998, avg=22903.65, stdev=11955.04 00:09:13.808 lat (usec): min=11003, max=64008, avg=23103.91, stdev=12001.14 00:09:13.808 clat percentiles (usec): 00:09:13.808 | 1.00th=[10683], 5.00th=[11207], 10.00th=[11600], 20.00th=[12649], 00:09:13.808 | 30.00th=[13960], 40.00th=[15795], 50.00th=[18220], 60.00th=[22414], 00:09:13.808 | 70.00th=[27657], 80.00th=[32637], 90.00th=[39060], 95.00th=[43254], 00:09:13.808 | 99.00th=[63701], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:09:13.808 | 99.99th=[64226] 00:09:13.808 bw ( KiB/s): min= 8192, max= 9232, per=11.65%, avg=8712.00, stdev=735.39, samples=2 00:09:13.808 iops : min= 2048, max= 2308, avg=2178.00, stdev=183.85, samples=2 00:09:13.808 lat (msec) : 10=0.41%, 20=45.07%, 50=42.13%, 100=10.96%, 250=1.42% 00:09:13.808 cpu : usr=2.09%, sys=2.59%, ctx=138, majf=0, minf=1 00:09:13.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:13.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.808 issued rwts: total=2048,2305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.808 job2: (groupid=0, jobs=1): err= 0: pid=3825203: Tue Oct 1 15:06:23 2024 00:09:13.808 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 
00:09:13.808 slat (nsec): min=1049, max=13922k, avg=83013.28, stdev=687190.58 00:09:13.808 clat (usec): min=1247, max=40855, avg=10796.83, stdev=5084.25 00:09:13.808 lat (usec): min=1262, max=40863, avg=10879.84, stdev=5147.62 00:09:13.808 clat percentiles (usec): 00:09:13.808 | 1.00th=[ 1893], 5.00th=[ 3785], 10.00th=[ 6063], 20.00th=[ 7767], 00:09:13.808 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10290], 00:09:13.808 | 70.00th=[11994], 80.00th=[14222], 90.00th=[16581], 95.00th=[18482], 00:09:13.808 | 99.00th=[34866], 99.50th=[38536], 99.90th=[40109], 99.95th=[40633], 00:09:13.808 | 99.99th=[40633] 00:09:13.808 write: IOPS=5871, BW=22.9MiB/s (24.1MB/s)(23.1MiB/1005msec); 0 zone resets 00:09:13.808 slat (nsec): min=1694, max=7848.5k, avg=72060.69, stdev=448950.42 00:09:13.808 clat (usec): min=1037, max=40801, avg=11272.36, stdev=7070.23 00:09:13.808 lat (usec): min=1051, max=40803, avg=11344.42, stdev=7119.04 00:09:13.808 clat percentiles (usec): 00:09:13.808 | 1.00th=[ 1893], 5.00th=[ 4146], 10.00th=[ 5014], 20.00th=[ 6521], 00:09:13.808 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8979], 60.00th=[ 9634], 00:09:13.808 | 70.00th=[11731], 80.00th=[17171], 90.00th=[20841], 95.00th=[26084], 00:09:13.808 | 99.00th=[36439], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:09:13.808 | 99.99th=[40633] 00:09:13.808 bw ( KiB/s): min=21264, max=24928, per=30.89%, avg=23096.00, stdev=2590.84, samples=2 00:09:13.808 iops : min= 5316, max= 6232, avg=5774.00, stdev=647.71, samples=2 00:09:13.808 lat (msec) : 2=1.34%, 4=3.68%, 10=57.67%, 20=30.06%, 50=7.26% 00:09:13.808 cpu : usr=3.88%, sys=7.97%, ctx=469, majf=0, minf=1 00:09:13.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:13.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.808 issued rwts: total=5632,5901,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:09:13.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.808 job3: (groupid=0, jobs=1): err= 0: pid=3825204: Tue Oct 1 15:06:23 2024 00:09:13.808 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:09:13.808 slat (nsec): min=923, max=3541.7k, avg=81757.95, stdev=434768.80 00:09:13.808 clat (usec): min=7086, max=14845, avg=10260.33, stdev=837.30 00:09:13.808 lat (usec): min=7091, max=14873, avg=10342.09, stdev=911.80 00:09:13.808 clat percentiles (usec): 00:09:13.808 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[ 9896], 00:09:13.808 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:09:13.808 | 70.00th=[10421], 80.00th=[10552], 90.00th=[11207], 95.00th=[11731], 00:09:13.808 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14222], 99.95th=[14615], 00:09:13.808 | 99.99th=[14877] 00:09:13.808 write: IOPS=6346, BW=24.8MiB/s (26.0MB/s)(24.8MiB/1002msec); 0 zone resets 00:09:13.808 slat (nsec): min=1562, max=3570.7k, avg=74261.14, stdev=325148.69 00:09:13.808 clat (usec): min=495, max=14447, avg=10004.33, stdev=1125.10 00:09:13.808 lat (usec): min=3521, max=14481, avg=10078.59, stdev=1120.62 00:09:13.808 clat percentiles (usec): 00:09:13.808 | 1.00th=[ 6980], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9503], 00:09:13.808 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:09:13.808 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:09:13.808 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14091], 99.95th=[14091], 00:09:13.808 | 99.99th=[14484] 00:09:13.808 bw ( KiB/s): min=24808, max=25048, per=33.34%, avg=24928.00, stdev=169.71, samples=2 00:09:13.808 iops : min= 6202, max= 6262, avg=6232.00, stdev=42.43, samples=2 00:09:13.808 lat (usec) : 500=0.01% 00:09:13.808 lat (msec) : 4=0.34%, 10=41.85%, 20=57.80% 00:09:13.808 cpu : usr=3.10%, sys=6.29%, ctx=791, majf=0, minf=1 00:09:13.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:13.808 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.808 issued rwts: total=6144,6359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.808 00:09:13.808 Run status group 0 (all jobs): 00:09:13.808 READ: bw=69.4MiB/s (72.7MB/s), 8151KiB/s-24.0MiB/s (8347kB/s-25.1MB/s), io=70.0MiB (73.4MB), run=1002-1009msec 00:09:13.808 WRITE: bw=73.0MiB/s (76.6MB/s), 9174KiB/s-24.8MiB/s (9394kB/s-26.0MB/s), io=73.7MiB (77.2MB), run=1002-1009msec 00:09:13.808 00:09:13.808 Disk stats (read/write): 00:09:13.808 nvme0n1: ios=3300/3584, merge=0/0, ticks=42968/59520, in_queue=102488, util=87.78% 00:09:13.808 nvme0n2: ios=1578/2002, merge=0/0, ticks=14864/10762, in_queue=25626, util=88.57% 00:09:13.808 nvme0n3: ios=4652/4679, merge=0/0, ticks=50221/49021, in_queue=99242, util=100.00% 00:09:13.808 nvme0n4: ios=5120/5373, merge=0/0, ticks=16995/16373, in_queue=33368, util=89.52% 00:09:13.808 15:06:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:13.808 [global] 00:09:13.808 thread=1 00:09:13.808 invalidate=1 00:09:13.808 rw=randwrite 00:09:13.808 time_based=1 00:09:13.808 runtime=1 00:09:13.808 ioengine=libaio 00:09:13.808 direct=1 00:09:13.808 bs=4096 00:09:13.808 iodepth=128 00:09:13.808 norandommap=0 00:09:13.808 numjobs=1 00:09:13.808 00:09:13.808 verify_dump=1 00:09:13.808 verify_backlog=512 00:09:13.808 verify_state_save=0 00:09:13.808 do_verify=1 00:09:13.808 verify=crc32c-intel 00:09:13.808 [job0] 00:09:13.808 filename=/dev/nvme0n1 00:09:13.808 [job1] 00:09:13.808 filename=/dev/nvme0n2 00:09:13.808 [job2] 00:09:13.808 filename=/dev/nvme0n3 00:09:13.808 [job3] 00:09:13.808 filename=/dev/nvme0n4 00:09:13.808 Could not set queue depth (nvme0n1) 00:09:13.808 Could not 
set queue depth (nvme0n2) 00:09:13.808 Could not set queue depth (nvme0n3) 00:09:13.808 Could not set queue depth (nvme0n4) 00:09:14.069 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.069 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.069 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.069 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.069 fio-3.35 00:09:14.069 Starting 4 threads 00:09:15.454 00:09:15.454 job0: (groupid=0, jobs=1): err= 0: pid=3825725: Tue Oct 1 15:06:25 2024 00:09:15.454 read: IOPS=5136, BW=20.1MiB/s (21.0MB/s)(21.0MiB/1047msec) 00:09:15.454 slat (nsec): min=898, max=11683k, avg=92003.92, stdev=651746.97 00:09:15.454 clat (usec): min=835, max=53116, avg=12966.90, stdev=8588.25 00:09:15.454 lat (usec): min=839, max=53123, avg=13058.90, stdev=8630.76 00:09:15.454 clat percentiles (usec): 00:09:15.454 | 1.00th=[ 2114], 5.00th=[ 4621], 10.00th=[ 7373], 20.00th=[ 8094], 00:09:15.454 | 30.00th=[ 9110], 40.00th=[10290], 50.00th=[10814], 60.00th=[11600], 00:09:15.454 | 70.00th=[12911], 80.00th=[14222], 90.00th=[21103], 95.00th=[32375], 00:09:15.454 | 99.00th=[50594], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:09:15.454 | 99.99th=[53216] 00:09:15.454 write: IOPS=5663, BW=22.1MiB/s (23.2MB/s)(23.2MiB/1047msec); 0 zone resets 00:09:15.454 slat (nsec): min=1504, max=15456k, avg=77431.32, stdev=586925.33 00:09:15.454 clat (usec): min=757, max=63790, avg=10642.46, stdev=5548.13 00:09:15.454 lat (usec): min=765, max=71167, avg=10719.89, stdev=5594.31 00:09:15.454 clat percentiles (usec): 00:09:15.454 | 1.00th=[ 1942], 5.00th=[ 3589], 10.00th=[ 5342], 20.00th=[ 7046], 00:09:15.454 | 30.00th=[ 7701], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[10683], 00:09:15.454 | 
70.00th=[11600], 80.00th=[13566], 90.00th=[16712], 95.00th=[20055], 00:09:15.454 | 99.00th=[32113], 99.50th=[39060], 99.90th=[41681], 99.95th=[61080], 00:09:15.454 | 99.99th=[63701] 00:09:15.454 bw ( KiB/s): min=20000, max=27440, per=26.92%, avg=23720.00, stdev=5260.87, samples=2 00:09:15.454 iops : min= 5000, max= 6860, avg=5930.00, stdev=1315.22, samples=2 00:09:15.454 lat (usec) : 1000=0.10% 00:09:15.454 lat (msec) : 2=0.83%, 4=3.56%, 10=39.72%, 20=48.34%, 50=6.75% 00:09:15.454 lat (msec) : 100=0.71% 00:09:15.454 cpu : usr=2.77%, sys=6.98%, ctx=375, majf=0, minf=1 00:09:15.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:15.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.454 issued rwts: total=5378,5930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.454 job1: (groupid=0, jobs=1): err= 0: pid=3825726: Tue Oct 1 15:06:25 2024 00:09:15.454 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:09:15.454 slat (nsec): min=903, max=24607k, avg=119926.56, stdev=942394.08 00:09:15.454 clat (usec): min=4518, max=60275, avg=14677.16, stdev=8058.11 00:09:15.454 lat (usec): min=4524, max=60281, avg=14797.09, stdev=8135.20 00:09:15.454 clat percentiles (usec): 00:09:15.454 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:15.454 | 30.00th=[10814], 40.00th=[11338], 50.00th=[12125], 60.00th=[12780], 00:09:15.454 | 70.00th=[14091], 80.00th=[16581], 90.00th=[24511], 95.00th=[29492], 00:09:15.454 | 99.00th=[45351], 99.50th=[49021], 99.90th=[60031], 99.95th=[60031], 00:09:15.454 | 99.99th=[60031] 00:09:15.454 write: IOPS=4935, BW=19.3MiB/s (20.2MB/s)(19.5MiB/1011msec); 0 zone resets 00:09:15.454 slat (nsec): min=1496, max=10566k, avg=84742.79, stdev=490512.47 00:09:15.454 clat (usec): min=1270, max=60281, avg=12146.98, 
stdev=5788.68 00:09:15.454 lat (usec): min=1278, max=61581, avg=12231.72, stdev=5812.90 00:09:15.454 clat percentiles (usec): 00:09:15.454 | 1.00th=[ 3818], 5.00th=[ 5669], 10.00th=[ 6980], 20.00th=[ 8717], 00:09:15.454 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:09:15.454 | 70.00th=[12125], 80.00th=[16188], 90.00th=[18744], 95.00th=[21627], 00:09:15.454 | 99.00th=[39060], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:09:15.454 | 99.99th=[60031] 00:09:15.454 bw ( KiB/s): min=16384, max=22512, per=22.07%, avg=19448.00, stdev=4333.15, samples=2 00:09:15.454 iops : min= 4096, max= 5628, avg=4862.00, stdev=1083.29, samples=2 00:09:15.454 lat (msec) : 2=0.03%, 4=0.52%, 10=29.37%, 20=60.01%, 50=9.80% 00:09:15.454 lat (msec) : 100=0.26% 00:09:15.454 cpu : usr=3.47%, sys=4.65%, ctx=459, majf=0, minf=1 00:09:15.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:15.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.454 issued rwts: total=4608,4990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.454 job2: (groupid=0, jobs=1): err= 0: pid=3825727: Tue Oct 1 15:06:25 2024 00:09:15.454 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:09:15.454 slat (nsec): min=938, max=18486k, avg=88570.47, stdev=662163.40 00:09:15.454 clat (usec): min=1601, max=34086, avg=11369.20, stdev=5046.94 00:09:15.454 lat (usec): min=1627, max=43016, avg=11457.77, stdev=5087.90 00:09:15.454 clat percentiles (usec): 00:09:15.454 | 1.00th=[ 2089], 5.00th=[ 5932], 10.00th=[ 6718], 20.00th=[ 7701], 00:09:15.454 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[11863], 00:09:15.454 | 70.00th=[12649], 80.00th=[13829], 90.00th=[16450], 95.00th=[24511], 00:09:15.454 | 99.00th=[28967], 99.50th=[31065], 99.90th=[34341], 99.95th=[34341], 
00:09:15.454 | 99.99th=[34341] 00:09:15.454 write: IOPS=5956, BW=23.3MiB/s (24.4MB/s)(23.4MiB/1007msec); 0 zone resets 00:09:15.454 slat (nsec): min=1551, max=10512k, avg=73663.77, stdev=424833.98 00:09:15.454 clat (usec): min=651, max=30058, avg=10629.29, stdev=4498.12 00:09:15.454 lat (usec): min=661, max=30082, avg=10702.95, stdev=4533.86 00:09:15.454 clat percentiles (usec): 00:09:15.454 | 1.00th=[ 3064], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 7308], 00:09:15.454 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 9765], 60.00th=[11469], 00:09:15.454 | 70.00th=[12911], 80.00th=[14484], 90.00th=[17433], 95.00th=[18482], 00:09:15.454 | 99.00th=[21365], 99.50th=[23462], 99.90th=[24773], 99.95th=[25560], 00:09:15.454 | 99.99th=[30016] 00:09:15.454 bw ( KiB/s): min=22072, max=24896, per=26.65%, avg=23484.00, stdev=1996.87, samples=2 00:09:15.454 iops : min= 5518, max= 6224, avg=5871.00, stdev=499.22, samples=2 00:09:15.454 lat (usec) : 750=0.03%, 1000=0.01% 00:09:15.454 lat (msec) : 2=0.39%, 4=2.18%, 10=46.80%, 20=45.96%, 50=4.64% 00:09:15.454 cpu : usr=3.58%, sys=6.96%, ctx=543, majf=0, minf=1 00:09:15.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:15.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.454 issued rwts: total=5632,5998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.454 job3: (groupid=0, jobs=1): err= 0: pid=3825728: Tue Oct 1 15:06:25 2024 00:09:15.454 read: IOPS=6010, BW=23.5MiB/s (24.6MB/s)(23.6MiB/1003msec) 00:09:15.454 slat (nsec): min=954, max=45211k, avg=83769.76, stdev=784142.22 00:09:15.454 clat (usec): min=828, max=52850, avg=10462.76, stdev=6091.49 00:09:15.454 lat (usec): min=1290, max=52859, avg=10546.53, stdev=6131.62 00:09:15.454 clat percentiles (usec): 00:09:15.454 | 1.00th=[ 2409], 5.00th=[ 4948], 10.00th=[ 6521], 
20.00th=[ 7701], 00:09:15.454 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9634], 00:09:15.454 | 70.00th=[10814], 80.00th=[12780], 90.00th=[14746], 95.00th=[18220], 00:09:15.454 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:09:15.454 | 99.99th=[52691] 00:09:15.454 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:09:15.454 slat (nsec): min=1567, max=12696k, avg=72321.52, stdev=459270.21 00:09:15.454 clat (usec): min=1290, max=64718, avg=10449.74, stdev=6744.03 00:09:15.454 lat (usec): min=1301, max=64727, avg=10522.06, stdev=6759.11 00:09:15.454 clat percentiles (usec): 00:09:15.454 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7111], 00:09:15.454 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8848], 00:09:15.454 | 70.00th=[10159], 80.00th=[12387], 90.00th=[18220], 95.00th=[19006], 00:09:15.455 | 99.00th=[51119], 99.50th=[52167], 99.90th=[61604], 99.95th=[61604], 00:09:15.455 | 99.99th=[64750] 00:09:15.455 bw ( KiB/s): min=24576, max=24576, per=27.89%, avg=24576.00, stdev= 0.00, samples=2 00:09:15.455 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:09:15.455 lat (usec) : 1000=0.01% 00:09:15.455 lat (msec) : 2=0.25%, 4=1.50%, 10=64.09%, 20=30.33%, 50=2.78% 00:09:15.455 lat (msec) : 100=1.04% 00:09:15.455 cpu : usr=3.49%, sys=7.19%, ctx=537, majf=0, minf=2 00:09:15.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:15.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.455 issued rwts: total=6029,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.455 00:09:15.455 Run status group 0 (all jobs): 00:09:15.455 READ: bw=80.8MiB/s (84.7MB/s), 17.8MiB/s-23.5MiB/s (18.7MB/s-24.6MB/s), io=84.6MiB (88.7MB), run=1003-1047msec 00:09:15.455 
WRITE: bw=86.0MiB/s (90.2MB/s), 19.3MiB/s-23.9MiB/s (20.2MB/s-25.1MB/s), io=90.1MiB (94.5MB), run=1003-1047msec 00:09:15.455 00:09:15.455 Disk stats (read/write): 00:09:15.455 nvme0n1: ios=4813/5120, merge=0/0, ticks=31879/31134, in_queue=63013, util=86.97% 00:09:15.455 nvme0n2: ios=3869/4096, merge=0/0, ticks=39512/31083, in_queue=70595, util=96.43% 00:09:15.455 nvme0n3: ios=4351/4608, merge=0/0, ticks=33732/35678, in_queue=69410, util=88.29% 00:09:15.455 nvme0n4: ios=5141/5193, merge=0/0, ticks=39004/38875, in_queue=77879, util=91.77% 00:09:15.455 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:15.455 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3826057 00:09:15.455 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:15.455 15:06:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:15.455 [global] 00:09:15.455 thread=1 00:09:15.455 invalidate=1 00:09:15.455 rw=read 00:09:15.455 time_based=1 00:09:15.455 runtime=10 00:09:15.455 ioengine=libaio 00:09:15.455 direct=1 00:09:15.455 bs=4096 00:09:15.455 iodepth=1 00:09:15.455 norandommap=1 00:09:15.455 numjobs=1 00:09:15.455 00:09:15.455 [job0] 00:09:15.455 filename=/dev/nvme0n1 00:09:15.455 [job1] 00:09:15.455 filename=/dev/nvme0n2 00:09:15.455 [job2] 00:09:15.455 filename=/dev/nvme0n3 00:09:15.455 [job3] 00:09:15.455 filename=/dev/nvme0n4 00:09:15.455 Could not set queue depth (nvme0n1) 00:09:15.455 Could not set queue depth (nvme0n2) 00:09:15.455 Could not set queue depth (nvme0n3) 00:09:15.455 Could not set queue depth (nvme0n4) 00:09:16.024 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.024 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.024 job2: 
(g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.024 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.024 fio-3.35 00:09:16.024 Starting 4 threads 00:09:18.577 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:18.577 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:18.577 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=708608, buflen=4096 00:09:18.577 fio: pid=3826253, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:18.837 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.837 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:18.837 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9007104, buflen=4096 00:09:18.837 fio: pid=3826252, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:18.837 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3354624, buflen=4096 00:09:18.837 fio: pid=3826250, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:18.837 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.837 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:19.096 fio: io_u error on file /dev/nvme0n2: 
Operation not supported: read offset=4755456, buflen=4096 00:09:19.096 fio: pid=3826251, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:19.096 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.096 15:06:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:19.096 00:09:19.096 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3826250: Tue Oct 1 15:06:28 2024 00:09:19.096 read: IOPS=278, BW=1114KiB/s (1141kB/s)(3276KiB/2940msec) 00:09:19.096 slat (usec): min=6, max=15083, avg=91.97, stdev=973.39 00:09:19.096 clat (usec): min=468, max=41848, avg=3464.60, stdev=10049.63 00:09:19.096 lat (usec): min=475, max=41878, avg=3538.31, stdev=10071.07 00:09:19.096 clat percentiles (usec): 00:09:19.096 | 1.00th=[ 545], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 734], 00:09:19.096 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 816], 00:09:19.096 | 70.00th=[ 848], 80.00th=[ 906], 90.00th=[ 1004], 95.00th=[41157], 00:09:19.096 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:19.096 | 99.99th=[41681] 00:09:19.096 bw ( KiB/s): min= 208, max= 3232, per=17.11%, avg=955.20, stdev=1279.57, samples=5 00:09:19.096 iops : min= 52, max= 808, avg=238.80, stdev=319.89, samples=5 00:09:19.096 lat (usec) : 500=0.49%, 750=25.85%, 1000=63.29% 00:09:19.096 lat (msec) : 2=3.66%, 50=6.59% 00:09:19.096 cpu : usr=0.41%, sys=0.68%, ctx=824, majf=0, minf=2 00:09:19.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.096 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.096 issued rwts: total=820,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:19.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.096 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3826251: Tue Oct 1 15:06:28 2024 00:09:19.096 read: IOPS=372, BW=1489KiB/s (1525kB/s)(4644KiB/3118msec) 00:09:19.096 slat (usec): min=6, max=35933, avg=88.59, stdev=1258.47 00:09:19.096 clat (usec): min=173, max=41852, avg=2572.54, stdev=8399.19 00:09:19.096 lat (usec): min=181, max=41878, avg=2661.19, stdev=8479.99 00:09:19.096 clat percentiles (usec): 00:09:19.096 | 1.00th=[ 453], 5.00th=[ 570], 10.00th=[ 619], 20.00th=[ 668], 00:09:19.096 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:09:19.096 | 70.00th=[ 791], 80.00th=[ 824], 90.00th=[ 898], 95.00th=[ 1057], 00:09:19.096 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:19.096 | 99.99th=[41681] 00:09:19.096 bw ( KiB/s): min= 352, max= 3784, per=26.01%, avg=1452.00, stdev=1510.95, samples=6 00:09:19.096 iops : min= 88, max= 946, avg=363.00, stdev=377.74, samples=6 00:09:19.096 lat (usec) : 250=0.09%, 500=2.32%, 750=47.42%, 1000=44.32% 00:09:19.096 lat (msec) : 2=1.12%, 10=0.09%, 20=0.09%, 50=4.48% 00:09:19.096 cpu : usr=0.29%, sys=1.06%, ctx=1167, majf=0, minf=2 00:09:19.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.096 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.096 issued rwts: total=1162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.096 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3826252: Tue Oct 1 15:06:28 2024 00:09:19.097 read: IOPS=787, BW=3148KiB/s (3224kB/s)(8796KiB/2794msec) 00:09:19.097 slat (nsec): min=6315, max=70164, avg=23946.56, stdev=6680.57 00:09:19.097 
clat (usec): min=337, max=41051, avg=1230.99, stdev=4262.11 00:09:19.097 lat (usec): min=363, max=41076, avg=1254.94, stdev=4262.21 00:09:19.097 clat percentiles (usec): 00:09:19.097 | 1.00th=[ 494], 5.00th=[ 611], 10.00th=[ 644], 20.00th=[ 693], 00:09:19.097 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 799], 00:09:19.097 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 898], 95.00th=[ 955], 00:09:19.097 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:19.097 | 99.99th=[41157] 00:09:19.097 bw ( KiB/s): min= 96, max= 5112, per=62.81%, avg=3507.20, stdev=2174.23, samples=5 00:09:19.097 iops : min= 24, max= 1278, avg=876.80, stdev=543.56, samples=5 00:09:19.097 lat (usec) : 500=1.09%, 750=35.05%, 1000=61.91% 00:09:19.097 lat (msec) : 2=0.77%, 50=1.14% 00:09:19.097 cpu : usr=0.72%, sys=2.33%, ctx=2201, majf=0, minf=2 00:09:19.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.097 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.097 issued rwts: total=2200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.097 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3826253: Tue Oct 1 15:06:28 2024 00:09:19.097 read: IOPS=66, BW=265KiB/s (272kB/s)(692KiB/2607msec) 00:09:19.097 slat (nsec): min=6926, max=40104, avg=21291.68, stdev=7801.27 00:09:19.097 clat (usec): min=541, max=42034, avg=14917.47, stdev=19497.08 00:09:19.097 lat (usec): min=567, max=42059, avg=14938.73, stdev=19497.36 00:09:19.097 clat percentiles (usec): 00:09:19.097 | 1.00th=[ 594], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 725], 00:09:19.097 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 807], 60.00th=[ 840], 00:09:19.097 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:19.097 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:19.097 | 99.99th=[42206] 00:09:19.097 bw ( KiB/s): min= 96, max= 888, per=4.89%, avg=273.60, stdev=344.99, samples=5 00:09:19.097 iops : min= 24, max= 222, avg=68.40, stdev=86.25, samples=5 00:09:19.097 lat (usec) : 750=28.74%, 1000=35.63% 00:09:19.097 lat (msec) : 2=0.57%, 50=34.48% 00:09:19.097 cpu : usr=0.23%, sys=0.00%, ctx=174, majf=0, minf=1 00:09:19.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.097 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.097 issued rwts: total=174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.097 00:09:19.097 Run status group 0 (all jobs): 00:09:19.097 READ: bw=5583KiB/s (5717kB/s), 265KiB/s-3148KiB/s (272kB/s-3224kB/s), io=17.0MiB (17.8MB), run=2607-3118msec 00:09:19.097 00:09:19.097 Disk stats (read/write): 00:09:19.097 nvme0n1: ios=763/0, merge=0/0, ticks=2777/0, in_queue=2777, util=93.92% 00:09:19.097 nvme0n2: ios=1151/0, merge=0/0, ticks=2961/0, in_queue=2961, util=93.37% 00:09:19.097 nvme0n3: ios=2194/0, merge=0/0, ticks=2475/0, in_queue=2475, util=95.99% 00:09:19.097 nvme0n4: ios=173/0, merge=0/0, ticks=2582/0, in_queue=2582, util=96.42% 00:09:19.356 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.356 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:19.614 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.614 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:19.614 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.614 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:19.873 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.873 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3826057 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:20.132 nvmf hotplug test: fio failed as expected 00:09:20.132 15:06:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.392 rmmod nvme_tcp 00:09:20.392 rmmod nvme_fabrics 00:09:20.392 rmmod nvme_keyring 
00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 3822367 ']' 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 3822367 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3822367 ']' 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3822367 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.392 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3822367 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3822367' 00:09:20.652 killing process with pid 3822367 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3822367 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3822367 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.652 15:06:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.196 00:09:23.196 real 0m29.241s 00:09:23.196 user 2m33.655s 00:09:23.196 sys 0m9.369s 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.196 ************************************ 00:09:23.196 END TEST nvmf_fio_target 00:09:23.196 ************************************ 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:23.196 
15:06:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.196 ************************************ 00:09:23.196 START TEST nvmf_bdevio 00:09:23.196 ************************************ 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:23.196 * Looking for test storage... 00:09:23.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.196 15:06:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.196 15:06:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.196 --rc genhtml_branch_coverage=1 00:09:23.196 --rc genhtml_function_coverage=1 00:09:23.196 --rc genhtml_legend=1 00:09:23.196 --rc geninfo_all_blocks=1 00:09:23.196 --rc geninfo_unexecuted_blocks=1 00:09:23.196 00:09:23.196 ' 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.196 --rc genhtml_branch_coverage=1 00:09:23.196 --rc genhtml_function_coverage=1 00:09:23.196 --rc genhtml_legend=1 00:09:23.196 --rc geninfo_all_blocks=1 00:09:23.196 --rc geninfo_unexecuted_blocks=1 00:09:23.196 00:09:23.196 ' 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.196 --rc genhtml_branch_coverage=1 00:09:23.196 --rc genhtml_function_coverage=1 00:09:23.196 --rc genhtml_legend=1 00:09:23.196 --rc geninfo_all_blocks=1 00:09:23.196 --rc geninfo_unexecuted_blocks=1 00:09:23.196 00:09:23.196 ' 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:23.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.196 --rc genhtml_branch_coverage=1 00:09:23.196 --rc genhtml_function_coverage=1 00:09:23.196 --rc genhtml_legend=1 00:09:23.196 --rc geninfo_all_blocks=1 00:09:23.196 --rc 
geninfo_unexecuted_blocks=1 00:09:23.196 00:09:23.196 ' 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.196 15:06:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.196 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:23.197 15:06:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:31.418 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:31.418 15:06:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:31.418 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:31.418 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.418 15:06:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:31.419 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:31.419 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # is_hw=yes 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.419 15:06:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:09:31.419 00:09:31.419 --- 10.0.0.2 ping statistics --- 00:09:31.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.419 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:09:31.419 00:09:31.419 --- 10.0.0.1 ping statistics --- 00:09:31.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.419 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # return 0 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=3831424 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 3831424 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3831424 ']' 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.419 15:06:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 [2024-10-01 15:06:40.320041] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:09:31.419 [2024-10-01 15:06:40.320109] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.419 [2024-10-01 15:06:40.409765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.419 [2024-10-01 15:06:40.503567] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.419 [2024-10-01 15:06:40.503626] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:31.419 [2024-10-01 15:06:40.503636] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.419 [2024-10-01 15:06:40.503644] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.419 [2024-10-01 15:06:40.503650] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.419 [2024-10-01 15:06:40.503814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:31.419 [2024-10-01 15:06:40.503968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:31.419 [2024-10-01 15:06:40.504129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:31.419 [2024-10-01 15:06:40.504264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 [2024-10-01 15:06:41.198405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 Malloc0 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.419 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 [2024-10-01 
15:06:41.263304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.420 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.420 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:31.420 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:31.420 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:09:31.420 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:09:31.420 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:31.420 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:31.420 { 00:09:31.420 "params": { 00:09:31.420 "name": "Nvme$subsystem", 00:09:31.420 "trtype": "$TEST_TRANSPORT", 00:09:31.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.420 "adrfam": "ipv4", 00:09:31.420 "trsvcid": "$NVMF_PORT", 00:09:31.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.420 "hdgst": ${hdgst:-false}, 00:09:31.420 "ddgst": ${ddgst:-false} 00:09:31.420 }, 00:09:31.420 "method": "bdev_nvme_attach_controller" 00:09:31.420 } 00:09:31.420 EOF 00:09:31.420 )") 00:09:31.420 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:09:31.680 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:09:31.680 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:09:31.680 15:06:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:31.680 "params": { 00:09:31.680 "name": "Nvme1", 00:09:31.680 "trtype": "tcp", 00:09:31.680 "traddr": "10.0.0.2", 00:09:31.680 "adrfam": "ipv4", 00:09:31.680 "trsvcid": "4420", 00:09:31.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.680 "hdgst": false, 00:09:31.680 "ddgst": false 00:09:31.680 }, 00:09:31.680 "method": "bdev_nvme_attach_controller" 00:09:31.680 }' 00:09:31.680 [2024-10-01 15:06:41.321848] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:09:31.680 [2024-10-01 15:06:41.321920] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3831665 ] 00:09:31.680 [2024-10-01 15:06:41.389506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.680 [2024-10-01 15:06:41.465367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.680 [2024-10-01 15:06:41.465491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.680 [2024-10-01 15:06:41.465494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.940 I/O targets: 00:09:31.940 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:31.940 00:09:31.940 00:09:31.940 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.940 http://cunit.sourceforge.net/ 00:09:31.940 00:09:31.940 00:09:31.940 Suite: bdevio tests on: Nvme1n1 00:09:32.201 Test: blockdev write read block ...passed 00:09:32.201 Test: blockdev write zeroes read block ...passed 00:09:32.201 Test: blockdev write zeroes read no split ...passed 00:09:32.201 Test: blockdev write zeroes read split 
...passed 00:09:32.201 Test: blockdev write zeroes read split partial ...passed 00:09:32.201 Test: blockdev reset ...[2024-10-01 15:06:41.891921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:32.201 [2024-10-01 15:06:41.891984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d80d0 (9): Bad file descriptor 00:09:32.201 [2024-10-01 15:06:41.913380] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:32.201 passed 00:09:32.201 Test: blockdev write read 8 blocks ...passed 00:09:32.201 Test: blockdev write read size > 128k ...passed 00:09:32.201 Test: blockdev write read invalid size ...passed 00:09:32.201 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:32.201 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:32.201 Test: blockdev write read max offset ...passed 00:09:32.462 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:32.462 Test: blockdev writev readv 8 blocks ...passed 00:09:32.462 Test: blockdev writev readv 30 x 1block ...passed 00:09:32.462 Test: blockdev writev readv block ...passed 00:09:32.462 Test: blockdev writev readv size > 128k ...passed 00:09:32.462 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:32.462 Test: blockdev comparev and writev ...[2024-10-01 15:06:42.176022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.462 [2024-10-01 15:06:42.176048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:32.462 [2024-10-01 15:06:42.176059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.462 [2024-10-01 15:06:42.176065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:32.462 [2024-10-01 15:06:42.176531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.462 [2024-10-01 15:06:42.176539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:32.462 [2024-10-01 15:06:42.176549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.463 [2024-10-01 15:06:42.176555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:32.463 [2024-10-01 15:06:42.176982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.463 [2024-10-01 15:06:42.176989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:32.463 [2024-10-01 15:06:42.177002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.463 [2024-10-01 15:06:42.177007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:32.463 [2024-10-01 15:06:42.177449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:32.463 [2024-10-01 15:06:42.177458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:32.463 [2024-10-01 15:06:42.177468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:09:32.463 [2024-10-01 15:06:42.177474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:32.463 passed 00:09:32.463 Test: blockdev nvme passthru rw ...passed 00:09:32.463 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:06:42.261784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.463 [2024-10-01 15:06:42.261794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:32.463 [2024-10-01 15:06:42.262131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.463 [2024-10-01 15:06:42.262138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:32.463 [2024-10-01 15:06:42.262469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.463 [2024-10-01 15:06:42.262477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:32.463 [2024-10-01 15:06:42.262803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:32.463 [2024-10-01 15:06:42.262810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:32.463 passed 00:09:32.463 Test: blockdev nvme admin passthru ...passed 00:09:32.724 Test: blockdev copy ...passed 00:09:32.724 00:09:32.724 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.724 suites 1 1 n/a 0 0 00:09:32.724 tests 23 23 23 0 0 00:09:32.724 asserts 152 152 152 0 n/a 00:09:32.724 00:09:32.724 Elapsed time = 1.108 seconds 00:09:32.724 15:06:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.724 rmmod nvme_tcp 00:09:32.724 rmmod nvme_fabrics 00:09:32.724 rmmod nvme_keyring 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 3831424 ']' 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 3831424 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3831424 ']' 
00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3831424 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.724 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3831424 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3831424' 00:09:32.984 killing process with pid 3831424 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3831424 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3831424 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.984 
15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.984 15:06:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.528 15:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.528 00:09:35.528 real 0m12.345s 00:09:35.528 user 0m13.604s 00:09:35.528 sys 0m6.308s 00:09:35.528 15:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.528 15:06:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:35.528 ************************************ 00:09:35.528 END TEST nvmf_bdevio 00:09:35.528 ************************************ 00:09:35.528 15:06:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:35.528 00:09:35.528 real 5m2.180s 00:09:35.528 user 11m38.899s 00:09:35.528 sys 1m48.200s 00:09:35.528 15:06:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.528 15:06:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.528 ************************************ 00:09:35.528 END TEST nvmf_target_core 00:09:35.528 ************************************ 00:09:35.528 15:06:44 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:35.528 15:06:44 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.528 15:06:44 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.528 15:06:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.528 
************************************ 00:09:35.528 START TEST nvmf_target_extra 00:09:35.528 ************************************ 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:35.528 * Looking for test storage... 00:09:35.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:35.528 
15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.528 --rc genhtml_branch_coverage=1 00:09:35.528 --rc genhtml_function_coverage=1 00:09:35.528 --rc genhtml_legend=1 00:09:35.528 --rc geninfo_all_blocks=1 00:09:35.528 
--rc geninfo_unexecuted_blocks=1 00:09:35.528 00:09:35.528 ' 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.528 --rc genhtml_branch_coverage=1 00:09:35.528 --rc genhtml_function_coverage=1 00:09:35.528 --rc genhtml_legend=1 00:09:35.528 --rc geninfo_all_blocks=1 00:09:35.528 --rc geninfo_unexecuted_blocks=1 00:09:35.528 00:09:35.528 ' 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.528 --rc genhtml_branch_coverage=1 00:09:35.528 --rc genhtml_function_coverage=1 00:09:35.528 --rc genhtml_legend=1 00:09:35.528 --rc geninfo_all_blocks=1 00:09:35.528 --rc geninfo_unexecuted_blocks=1 00:09:35.528 00:09:35.528 ' 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.528 --rc genhtml_branch_coverage=1 00:09:35.528 --rc genhtml_function_coverage=1 00:09:35.528 --rc genhtml_legend=1 00:09:35.528 --rc geninfo_all_blocks=1 00:09:35.528 --rc geninfo_unexecuted_blocks=1 00:09:35.528 00:09:35.528 ' 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.528 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:35.529 ************************************ 00:09:35.529 START TEST nvmf_example 00:09:35.529 ************************************ 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:35.529 * Looking for test storage... 00:09:35.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:09:35.529 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.790 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.790 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.790 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.790 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.790 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.790 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.791 
15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.791 --rc genhtml_branch_coverage=1 00:09:35.791 --rc genhtml_function_coverage=1 00:09:35.791 --rc genhtml_legend=1 00:09:35.791 --rc geninfo_all_blocks=1 00:09:35.791 --rc geninfo_unexecuted_blocks=1 00:09:35.791 00:09:35.791 ' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.791 --rc genhtml_branch_coverage=1 00:09:35.791 --rc genhtml_function_coverage=1 00:09:35.791 --rc genhtml_legend=1 00:09:35.791 --rc geninfo_all_blocks=1 00:09:35.791 --rc geninfo_unexecuted_blocks=1 00:09:35.791 00:09:35.791 ' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.791 --rc genhtml_branch_coverage=1 00:09:35.791 --rc genhtml_function_coverage=1 00:09:35.791 --rc genhtml_legend=1 00:09:35.791 --rc geninfo_all_blocks=1 00:09:35.791 --rc geninfo_unexecuted_blocks=1 00:09:35.791 00:09:35.791 ' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.791 --rc 
genhtml_branch_coverage=1 00:09:35.791 --rc genhtml_function_coverage=1 00:09:35.791 --rc genhtml_legend=1 00:09:35.791 --rc geninfo_all_blocks=1 00:09:35.791 --rc geninfo_unexecuted_blocks=1 00:09:35.791 00:09:35.791 ' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:35.791 15:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.791 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.792 
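[Editor's annotation] The xtrace entries above walk SPDK's version-comparison helper in scripts/common.sh (`cmp_versions` / `lt 1.15 2`): split each version string on the separators `.-:`, then compare components numerically left to right. A minimal standalone sketch of the same idea, assuming bash; the function name and the dot-only splitting here are illustrative, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component by component, numerically.
# Prints "lt", "gt", or "eq". Illustrative only: SPDK's cmp_versions in
# scripts/common.sh also splits on '-' and ':' and supports more operators.
ver_cmp() {
  local IFS=.
  local -a a=($1) b=($2)          # word-split each version into components
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
    (( x > y )) && { echo gt; return; }
    (( x < y )) && { echo lt; return; }
  done
  echo eq
}

ver_cmp 1.15 2     # lt: lcov 1.15 is older than 2.x, as the trace concluded
ver_cmp 2.39 2.39  # eq
```

This is why the trace above returns 0 from `lt 1.15 2` and enables the lcov branch/function coverage options: component 0 compares 1 < 2 and the comparison short-circuits.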
15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.792 15:06:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.931 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:43.932 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:43.932 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:43.932 
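[Editor's annotation] The `gather_supported_nvmf_pci_devs` loop traced here matches PCI functions against known vendor:device IDs (here Intel E810, 0x8086:0x159b) and then resolves each match to its kernel net device via sysfs, producing the "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines. A hedged sketch of that discovery pattern, not SPDK's actual script (which builds a pci_bus_cache first and handles many more IDs); output depends entirely on the host's hardware:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the PCI NIC discovery traced above: scan sysfs for
# Intel E810 functions (vendor 0x8086, device 0x159b) and list the net
# devices bound to each. Requires a Linux host; prints nothing without one.
for dev in /sys/bus/pci/devices/*; do
  [[ -e $dev/vendor ]] || continue
  vendor=$(<"$dev/vendor") device=$(<"$dev/device")
  if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
    echo "Found ${dev##*/} ($vendor - $device)"
    ls "$dev/net" 2>/dev/null    # e.g. cvl_0_0 in this log
  fi
done
```

On the machine in this log the two matches are the ports 0000:4b:00.0 and 0000:4b:00.1, which map to the net devices cvl_0_0 and cvl_0_1 used in the namespace setup that follows.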
15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ up == up ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:43.932 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@414 -- # [[ up == up ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:43.932 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # is_hw=yes 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:43.932 15:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:43.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:43.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms
00:09:43.932
00:09:43.932 --- 10.0.0.2 ping statistics ---
00:09:43.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:43.932 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:43.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:43.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms
00:09:43.932
00:09:43.932 --- 10.0.0.1 ping statistics ---
00:09:43.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:43.932 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # return 0
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- #
nvmfexamplestart '-m 0xF' 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3836353 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3836353 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3836353 ']' 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.932 15:06:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:43.933 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.933 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:09:43.933 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:43.933 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.933 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:44.193 15:06:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:54.188 Initializing NVMe Controllers
00:09:54.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:54.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:54.188 Initialization complete. Launching workers.
00:09:54.188 ========================================================
00:09:54.188 Latency(us)
00:09:54.188 Device Information : IOPS MiB/s Average min max
00:09:54.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18574.70 72.56 3445.02 609.36 16015.46
00:09:54.188 ========================================================
00:09:54.188 Total : 18574.70 72.56 3445.02 609.36 16015.46
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@512 -- # nvmfcleanup
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:54.450 rmmod nvme_tcp
00:09:54.450 rmmod nvme_fabrics
00:09:54.450 rmmod nvme_keyring
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 --
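The provisioning sequence the test issued through its rpc_cmd wrapper (nvmf_example.sh@45-@57), followed by the spdk_nvme_perf invocation whose results appear above, can be sketched as a plain command sequence against a running nvmf target. The RPC names, flags, NQN, serial, address, and port are all taken verbatim from the log; the `scripts/rpc.py` path and default RPC socket are assumptions of this sketch:

```shell
# Sketch: provision the target over JSON-RPC (assumes nvmf_tgt is already
# running and scripts/rpc.py is on the relative path shown).
RPC=scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport; -u sets the I/O unit size
$RPC bdev_malloc_create 64 512                        # 64 MiB RAM-backed bdev, 512 B blocks -> "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Then drive it from the initiator side, exactly as the log does:
# QD 64, 4 KiB I/O, random mixed workload with 30% reads, 10 s run.
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```

In the test itself every RPC runs inside the cvl_0_0_ns_spdk namespace (via NVMF_TARGET_NS_CMD), since that is where the target process and its listener live.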
# return 0
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@513 -- # '[' -n 3836353 ']'
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # killprocess 3836353
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3836353 ']'
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3836353
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3836353
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3836353'
00:09:54.450 killing process with pid 3836353
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3836353
00:09:54.450 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3836353
00:09:54.710 nvmf threads initialize successfully
00:09:54.710 bdev subsystem init successfully
00:09:54.710 created a nvmf target service
00:09:54.711 create targets's poll groups done
00:09:54.711 all subsystems of target started
00:09:54.711 nvmf target is running
00:09:54.711 all subsystems of target stopped
00:09:54.711 destroy targets's poll groups done
00:09:54.711 destroyed the nvmf target service
00:09:54.711 bdev subsystem finish successfully
00:09:54.711 nvmf threads destroy successfully
00:09:54.711 15:07:04
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-save
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@787 -- # iptables-restore
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:54.711 15:07:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:56.622 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:56.622 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:56.622 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:56.622 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:56.622
00:09:56.622 real 0m21.177s
00:09:56.622 user 0m46.417s
00:09:56.622 sys 0m6.669s
00:09:56.622 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- #
xtrace_disable
00:09:56.622 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:56.622 ************************************
00:09:56.622 END TEST nvmf_example
00:09:56.622 ************************************
00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:56.883 ************************************
00:09:56.883 START TEST nvmf_filesystem
00:09:56.883 ************************************
00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:56.883 * Looking for test storage...
00:09:56.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:56.883 
15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:56.883 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:56.883 --rc genhtml_branch_coverage=1 00:09:56.883 --rc genhtml_function_coverage=1 00:09:56.883 --rc genhtml_legend=1 00:09:56.883 --rc geninfo_all_blocks=1 00:09:56.883 --rc geninfo_unexecuted_blocks=1 00:09:56.883 00:09:56.883 ' 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:56.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.883 --rc genhtml_branch_coverage=1 00:09:56.883 --rc genhtml_function_coverage=1 00:09:56.883 --rc genhtml_legend=1 00:09:56.883 --rc geninfo_all_blocks=1 00:09:56.883 --rc geninfo_unexecuted_blocks=1 00:09:56.883 00:09:56.883 ' 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:56.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.883 --rc genhtml_branch_coverage=1 00:09:56.883 --rc genhtml_function_coverage=1 00:09:56.883 --rc genhtml_legend=1 00:09:56.883 --rc geninfo_all_blocks=1 00:09:56.883 --rc geninfo_unexecuted_blocks=1 00:09:56.883 00:09:56.883 ' 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:56.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.883 --rc genhtml_branch_coverage=1 00:09:56.883 --rc genhtml_function_coverage=1 00:09:56.883 --rc genhtml_legend=1 00:09:56.883 --rc geninfo_all_blocks=1 00:09:56.883 --rc geninfo_unexecuted_blocks=1 00:09:56.883 00:09:56.883 ' 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:56.883 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:56.883 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:56.883 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:57.147 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:09:57.147 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:09:57.148 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:57.148 #define SPDK_CONFIG_H 00:09:57.148 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:57.148 #define SPDK_CONFIG_APPS 1 00:09:57.148 #define SPDK_CONFIG_ARCH native 00:09:57.148 #undef SPDK_CONFIG_ASAN 00:09:57.148 #undef SPDK_CONFIG_AVAHI 00:09:57.148 #undef SPDK_CONFIG_CET 00:09:57.148 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:57.148 #define SPDK_CONFIG_COVERAGE 1 00:09:57.148 #define SPDK_CONFIG_CROSS_PREFIX 00:09:57.148 #undef SPDK_CONFIG_CRYPTO 00:09:57.148 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:57.148 #undef SPDK_CONFIG_CUSTOMOCF 00:09:57.148 #undef SPDK_CONFIG_DAOS 00:09:57.148 #define SPDK_CONFIG_DAOS_DIR 00:09:57.148 #define SPDK_CONFIG_DEBUG 1 00:09:57.148 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:57.148 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:57.148 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:57.148 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:57.148 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:57.148 #undef SPDK_CONFIG_DPDK_UADK 00:09:57.148 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:57.148 #define SPDK_CONFIG_EXAMPLES 1 00:09:57.148 #undef SPDK_CONFIG_FC 00:09:57.148 #define SPDK_CONFIG_FC_PATH 00:09:57.148 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:57.148 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:57.148 #define SPDK_CONFIG_FSDEV 1 00:09:57.148 #undef SPDK_CONFIG_FUSE 00:09:57.148 #undef SPDK_CONFIG_FUZZER 00:09:57.148 #define SPDK_CONFIG_FUZZER_LIB 00:09:57.148 #undef SPDK_CONFIG_GOLANG 00:09:57.148 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:57.148 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:57.148 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:57.148 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:57.148 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:57.148 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:09:57.148 #undef SPDK_CONFIG_HAVE_LZ4 00:09:57.148 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:57.148 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:57.148 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:57.148 #define SPDK_CONFIG_IDXD 1 00:09:57.148 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:57.148 #undef SPDK_CONFIG_IPSEC_MB 00:09:57.148 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:57.148 #define SPDK_CONFIG_ISAL 1 00:09:57.148 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:57.148 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:57.148 #define SPDK_CONFIG_LIBDIR 00:09:57.148 #undef SPDK_CONFIG_LTO 00:09:57.148 #define SPDK_CONFIG_MAX_LCORES 128 00:09:57.148 #define SPDK_CONFIG_NVME_CUSE 1 00:09:57.148 #undef SPDK_CONFIG_OCF 00:09:57.148 #define SPDK_CONFIG_OCF_PATH 00:09:57.148 #define SPDK_CONFIG_OPENSSL_PATH 00:09:57.148 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:57.148 #define SPDK_CONFIG_PGO_DIR 00:09:57.148 #undef SPDK_CONFIG_PGO_USE 00:09:57.148 #define SPDK_CONFIG_PREFIX /usr/local 00:09:57.148 #undef SPDK_CONFIG_RAID5F 00:09:57.148 #undef SPDK_CONFIG_RBD 00:09:57.148 #define SPDK_CONFIG_RDMA 1 00:09:57.148 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:57.148 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:57.148 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:57.148 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:57.148 #define SPDK_CONFIG_SHARED 1 00:09:57.148 #undef SPDK_CONFIG_SMA 00:09:57.148 #define SPDK_CONFIG_TESTS 1 00:09:57.148 #undef SPDK_CONFIG_TSAN 00:09:57.148 #define SPDK_CONFIG_UBLK 1 00:09:57.148 #define SPDK_CONFIG_UBSAN 1 00:09:57.148 #undef SPDK_CONFIG_UNIT_TESTS 00:09:57.148 #undef SPDK_CONFIG_URING 00:09:57.148 #define SPDK_CONFIG_URING_PATH 00:09:57.148 #undef SPDK_CONFIG_URING_ZNS 00:09:57.148 #undef SPDK_CONFIG_USDT 00:09:57.148 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:57.148 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:57.148 #define SPDK_CONFIG_VFIO_USER 1 00:09:57.148 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:57.148 
#define SPDK_CONFIG_VHOST 1 00:09:57.148 #define SPDK_CONFIG_VIRTIO 1 00:09:57.148 #undef SPDK_CONFIG_VTUNE 00:09:57.148 #define SPDK_CONFIG_VTUNE_DIR 00:09:57.148 #define SPDK_CONFIG_WERROR 1 00:09:57.148 #define SPDK_CONFIG_WPDK_DIR 00:09:57.148 #undef SPDK_CONFIG_XNVME 00:09:57.148 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.148 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:57.149 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:57.149 
15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:57.149 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:57.149 
15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:57.149 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:57.150 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:57.150 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3839143 ]] 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3839143 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.EK3Lfu 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EK3Lfu/tests/target /tmp/spdk.EK3Lfu 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=785162240 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4499267584 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=119154057216 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356533760 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10202476544 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666898432 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847939072 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23367680 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=101376 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:09:57.151 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=402432 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677867520 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=401408 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:57.151 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:57.152 * Looking for test storage... 
00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=119154057216 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=12417069056 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.152 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:57.152 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.152 --rc genhtml_branch_coverage=1 00:09:57.152 --rc genhtml_function_coverage=1 00:09:57.152 --rc genhtml_legend=1 00:09:57.152 --rc geninfo_all_blocks=1 00:09:57.152 --rc geninfo_unexecuted_blocks=1 00:09:57.152 00:09:57.152 ' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.152 --rc genhtml_branch_coverage=1 00:09:57.152 --rc genhtml_function_coverage=1 00:09:57.152 --rc genhtml_legend=1 00:09:57.152 --rc geninfo_all_blocks=1 00:09:57.152 --rc geninfo_unexecuted_blocks=1 00:09:57.152 00:09:57.152 ' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.152 --rc genhtml_branch_coverage=1 00:09:57.152 --rc genhtml_function_coverage=1 00:09:57.152 --rc genhtml_legend=1 00:09:57.152 --rc geninfo_all_blocks=1 00:09:57.152 --rc geninfo_unexecuted_blocks=1 00:09:57.152 00:09:57.152 ' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.152 --rc genhtml_branch_coverage=1 00:09:57.152 --rc genhtml_function_coverage=1 00:09:57.152 --rc genhtml_legend=1 00:09:57.152 --rc geninfo_all_blocks=1 00:09:57.152 --rc geninfo_unexecuted_blocks=1 00:09:57.152 00:09:57.152 ' 00:09:57.152 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.152 15:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.153 15:07:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.413 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.414 15:07:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.583 15:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:05.583 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 
00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:05.583 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:05.583 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:05.583 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # 
(( 2 == 0 )) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # is_hw=yes 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.583 15:07:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.583 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:05.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:05.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:10:05.584 00:10:05.584 --- 10.0.0.2 ping statistics --- 00:10:05.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.584 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:05.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:10:05.584 00:10:05.584 --- 10.0.0.1 ping statistics --- 00:10:05.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.584 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # return 0 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:05.584 15:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 ************************************ 00:10:05.584 START TEST nvmf_filesystem_no_in_capsule 00:10:05.584 ************************************ 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3842814 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3842814 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 3842814 ']' 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.584 15:07:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 [2024-10-01 15:07:14.400398] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:10:05.584 [2024-10-01 15:07:14.400458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.584 [2024-10-01 15:07:14.470642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.584 [2024-10-01 15:07:14.545436] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.584 [2024-10-01 15:07:14.545476] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:05.584 [2024-10-01 15:07:14.545485] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.584 [2024-10-01 15:07:14.545491] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.584 [2024-10-01 15:07:14.545498] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.584 [2024-10-01 15:07:14.545638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.584 [2024-10-01 15:07:14.545768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.584 [2024-10-01 15:07:14.545927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.584 [2024-10-01 15:07:14.545928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 [2024-10-01 15:07:15.229409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 Malloc1 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.584 [2024-10-01 15:07:15.359075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:05.584 15:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.584 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:05.585 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.585 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:05.585 { 00:10:05.585 "name": "Malloc1", 00:10:05.585 "aliases": [ 00:10:05.585 "2814fcd0-e490-4bb5-9b71-2eea89031fe0" 00:10:05.585 ], 00:10:05.585 "product_name": "Malloc disk", 00:10:05.585 "block_size": 512, 00:10:05.585 "num_blocks": 1048576, 00:10:05.585 "uuid": "2814fcd0-e490-4bb5-9b71-2eea89031fe0", 00:10:05.585 "assigned_rate_limits": { 00:10:05.585 "rw_ios_per_sec": 0, 00:10:05.585 "rw_mbytes_per_sec": 0, 00:10:05.585 "r_mbytes_per_sec": 0, 00:10:05.585 "w_mbytes_per_sec": 0 00:10:05.585 }, 00:10:05.585 "claimed": true, 00:10:05.585 "claim_type": "exclusive_write", 00:10:05.585 "zoned": false, 00:10:05.585 "supported_io_types": { 00:10:05.585 "read": true, 00:10:05.585 "write": true, 00:10:05.585 "unmap": true, 00:10:05.585 "flush": true, 00:10:05.585 "reset": true, 00:10:05.585 "nvme_admin": false, 00:10:05.585 "nvme_io": false, 00:10:05.585 "nvme_io_md": false, 00:10:05.585 "write_zeroes": true, 00:10:05.585 "zcopy": true, 00:10:05.585 "get_zone_info": false, 00:10:05.585 "zone_management": false, 00:10:05.585 "zone_append": false, 00:10:05.585 "compare": false, 00:10:05.585 "compare_and_write": 
false, 00:10:05.585 "abort": true, 00:10:05.585 "seek_hole": false, 00:10:05.585 "seek_data": false, 00:10:05.585 "copy": true, 00:10:05.585 "nvme_iov_md": false 00:10:05.585 }, 00:10:05.585 "memory_domains": [ 00:10:05.585 { 00:10:05.585 "dma_device_id": "system", 00:10:05.585 "dma_device_type": 1 00:10:05.585 }, 00:10:05.585 { 00:10:05.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.585 "dma_device_type": 2 00:10:05.585 } 00:10:05.585 ], 00:10:05.585 "driver_specific": {} 00:10:05.585 } 00:10:05.585 ]' 00:10:05.585 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:05.585 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:05.585 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:05.845 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:05.845 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:05.845 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:05.845 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:05.845 15:07:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.229 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:07.229 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:07.229 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.229 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:07.229 15:07:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:09.139 15:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:09.139 15:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:09.139 15:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.400 15:07:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:09.400 15:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:09.400 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:09.662 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:10.234 15:07:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:11.177 15:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.177 ************************************ 00:10:11.177 START TEST filesystem_ext4 00:10:11.177 ************************************ 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:11.177 15:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:11.177 15:07:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:11.177 mke2fs 1.47.0 (5-Feb-2023) 00:10:11.177 Discarding device blocks: 0/522240 done 00:10:11.177 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:11.177 Filesystem UUID: e204ecce-2220-4553-b838-40f22c068e79 00:10:11.177 Superblock backups stored on blocks: 00:10:11.177 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:11.177 00:10:11.177 Allocating group tables: 0/64 done 00:10:11.177 Writing inode tables: 0/64 done 00:10:11.438 Creating journal (8192 blocks): done 00:10:11.438 Writing superblocks and filesystem accounting information: 0/64 done 00:10:11.438 00:10:11.438 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:11.438 15:07:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:18.020 15:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3842814 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:18.020 00:10:18.020 real 0m6.011s 00:10:18.020 user 0m0.036s 00:10:18.020 sys 0m0.072s 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 ************************************ 00:10:18.020 END TEST filesystem_ext4 00:10:18.020 ************************************ 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:18.020 
15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.020 ************************************ 00:10:18.020 START TEST filesystem_btrfs 00:10:18.020 ************************************ 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:18.020 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:18.021 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:18.021 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:18.021 15:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:18.021 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:18.021 15:07:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:18.021 btrfs-progs v6.8.1 00:10:18.021 See https://btrfs.readthedocs.io for more information. 00:10:18.021 00:10:18.021 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:18.021 NOTE: several default settings have changed in version 5.15, please make sure 00:10:18.021 this does not affect your deployments: 00:10:18.021 - DUP for metadata (-m dup) 00:10:18.021 - enabled no-holes (-O no-holes) 00:10:18.021 - enabled free-space-tree (-R free-space-tree) 00:10:18.021 00:10:18.021 Label: (null) 00:10:18.021 UUID: d0a9a946-675e-445f-b693-768a63032a3f 00:10:18.021 Node size: 16384 00:10:18.021 Sector size: 4096 (CPU page size: 4096) 00:10:18.021 Filesystem size: 510.00MiB 00:10:18.021 Block group profiles: 00:10:18.021 Data: single 8.00MiB 00:10:18.021 Metadata: DUP 32.00MiB 00:10:18.021 System: DUP 8.00MiB 00:10:18.021 SSD detected: yes 00:10:18.021 Zoned device: no 00:10:18.021 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:18.021 Checksum: crc32c 00:10:18.021 Number of devices: 1 00:10:18.021 Devices: 00:10:18.021 ID SIZE PATH 00:10:18.021 1 510.00MiB /dev/nvme0n1p1 00:10:18.021 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.021 15:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3842814 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:18.021 00:10:18.021 real 0m0.904s 00:10:18.021 user 0m0.025s 00:10:18.021 sys 0m0.124s 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.021 
15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:18.021 ************************************ 00:10:18.021 END TEST filesystem_btrfs 00:10:18.021 ************************************ 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:18.021 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.282 ************************************ 00:10:18.282 START TEST filesystem_xfs 00:10:18.282 ************************************ 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:18.282 15:07:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:18.282 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:18.282 = sectsz=512 attr=2, projid32bit=1 00:10:18.282 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:18.282 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:18.282 data = bsize=4096 blocks=130560, imaxpct=25 00:10:18.282 = sunit=0 swidth=0 blks 00:10:18.282 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:18.282 log =internal log bsize=4096 blocks=16384, version=2 00:10:18.282 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:18.282 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:19.223 Discarding blocks...Done. 
00:10:19.223 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:19.223 15:07:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3842814 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:21.133 15:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:21.133 00:10:21.133 real 0m2.901s 00:10:21.133 user 0m0.025s 00:10:21.133 sys 0m0.081s 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:21.133 ************************************ 00:10:21.133 END TEST filesystem_xfs 00:10:21.133 ************************************ 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:21.133 15:07:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3842814 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3842814 ']' 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3842814 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3842814 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3842814' 00:10:21.394 killing process with pid 3842814 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3842814 00:10:21.394 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3842814 00:10:21.654 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:21.654 00:10:21.654 real 0m17.141s 00:10:21.654 user 1m7.583s 00:10:21.654 sys 0m1.387s 00:10:21.654 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.654 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.654 ************************************ 00:10:21.654 END TEST nvmf_filesystem_no_in_capsule 00:10:21.654 ************************************ 00:10:21.654 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:21.654 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.654 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.654 15:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:21.914 ************************************ 00:10:21.914 START TEST nvmf_filesystem_in_capsule 00:10:21.914 ************************************ 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # nvmfpid=3846405 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # waitforlisten 3846405 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3846405 ']' 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.914 15:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.914 15:07:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.914 [2024-10-01 15:07:31.604752] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:10:21.914 [2024-10-01 15:07:31.604805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.914 [2024-10-01 15:07:31.674036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.914 [2024-10-01 15:07:31.748408] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.914 [2024-10-01 15:07:31.748447] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.914 [2024-10-01 15:07:31.748455] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.914 [2024-10-01 15:07:31.748461] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.914 [2024-10-01 15:07:31.748467] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:21.914 [2024-10-01 15:07:31.748607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.914 [2024-10-01 15:07:31.748729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.914 [2024-10-01 15:07:31.748884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.914 [2024-10-01 15:07:31.748885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.855 [2024-10-01 15:07:32.457122] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.855 Malloc1 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.855 15:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.855 [2024-10-01 15:07:32.585068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.855 15:07:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:22.855 { 00:10:22.855 "name": "Malloc1", 00:10:22.855 "aliases": [ 00:10:22.855 "fa48d1dc-168e-4375-8454-27e195e97ab4" 00:10:22.855 ], 00:10:22.855 "product_name": "Malloc disk", 00:10:22.855 "block_size": 512, 00:10:22.855 "num_blocks": 1048576, 00:10:22.855 "uuid": "fa48d1dc-168e-4375-8454-27e195e97ab4", 00:10:22.855 "assigned_rate_limits": { 00:10:22.855 "rw_ios_per_sec": 0, 00:10:22.855 "rw_mbytes_per_sec": 0, 00:10:22.855 "r_mbytes_per_sec": 0, 00:10:22.855 "w_mbytes_per_sec": 0 00:10:22.855 }, 00:10:22.855 "claimed": true, 00:10:22.855 "claim_type": "exclusive_write", 00:10:22.855 "zoned": false, 00:10:22.855 "supported_io_types": { 00:10:22.855 "read": true, 00:10:22.855 "write": true, 00:10:22.855 "unmap": true, 00:10:22.855 "flush": true, 00:10:22.855 "reset": true, 00:10:22.855 "nvme_admin": false, 00:10:22.855 "nvme_io": false, 00:10:22.855 "nvme_io_md": false, 00:10:22.855 "write_zeroes": true, 00:10:22.855 "zcopy": true, 00:10:22.855 "get_zone_info": false, 00:10:22.855 "zone_management": false, 00:10:22.855 "zone_append": false, 00:10:22.855 "compare": false, 00:10:22.855 "compare_and_write": false, 00:10:22.855 "abort": true, 00:10:22.855 "seek_hole": false, 00:10:22.855 "seek_data": false, 00:10:22.855 "copy": true, 00:10:22.855 "nvme_iov_md": false 00:10:22.855 }, 00:10:22.855 "memory_domains": [ 00:10:22.855 { 00:10:22.855 "dma_device_id": "system", 00:10:22.855 "dma_device_type": 1 00:10:22.855 }, 00:10:22.855 { 00:10:22.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.855 "dma_device_type": 2 00:10:22.855 } 00:10:22.855 ], 00:10:22.855 
"driver_specific": {} 00:10:22.855 } 00:10:22.855 ]' 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:22.855 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:22.856 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:22.856 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:22.856 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:22.856 15:07:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:24.766 15:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.766 15:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:24.766 15:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.766 15:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:10:24.766 15:07:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:26.677 15:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:26.677 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:26.938 15:07:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:27.880 15:07:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.823 ************************************ 00:10:28.823 START TEST filesystem_in_capsule_ext4 00:10:28.823 ************************************ 00:10:28.823 15:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:28.823 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:28.823 mke2fs 1.47.0 (5-Feb-2023) 00:10:28.823 Discarding device blocks: 
0/522240 done 00:10:28.823 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:28.823 Filesystem UUID: afb9016d-454d-4d8b-958d-86f52a38ce4c 00:10:28.823 Superblock backups stored on blocks: 00:10:28.824 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:28.824 00:10:28.824 Allocating group tables: 0/64 done 00:10:28.824 Writing inode tables: 0/64 done 00:10:29.084 Creating journal (8192 blocks): done 00:10:29.084 Writing superblocks and filesystem accounting information: 0/64 done 00:10:29.084 00:10:29.084 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:29.084 15:07:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3846405 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:35.670 00:10:35.670 real 0m6.021s 00:10:35.670 user 0m0.022s 00:10:35.670 sys 0m0.084s 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:35.670 ************************************ 00:10:35.670 END TEST filesystem_in_capsule_ext4 00:10:35.670 ************************************ 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.670 ************************************ 00:10:35.670 START 
TEST filesystem_in_capsule_btrfs 00:10:35.670 ************************************ 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:35.670 15:07:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:35.670 btrfs-progs v6.8.1 00:10:35.670 See https://btrfs.readthedocs.io for more information. 00:10:35.670 00:10:35.670 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:35.670 NOTE: several default settings have changed in version 5.15, please make sure 00:10:35.670 this does not affect your deployments: 00:10:35.670 - DUP for metadata (-m dup) 00:10:35.670 - enabled no-holes (-O no-holes) 00:10:35.670 - enabled free-space-tree (-R free-space-tree) 00:10:35.670 00:10:35.670 Label: (null) 00:10:35.670 UUID: 183c7f62-5e6b-4179-ad16-3b9c57c6b076 00:10:35.670 Node size: 16384 00:10:35.670 Sector size: 4096 (CPU page size: 4096) 00:10:35.670 Filesystem size: 510.00MiB 00:10:35.670 Block group profiles: 00:10:35.670 Data: single 8.00MiB 00:10:35.670 Metadata: DUP 32.00MiB 00:10:35.670 System: DUP 8.00MiB 00:10:35.670 SSD detected: yes 00:10:35.670 Zoned device: no 00:10:35.670 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:35.670 Checksum: crc32c 00:10:35.670 Number of devices: 1 00:10:35.670 Devices: 00:10:35.670 ID SIZE PATH 00:10:35.670 1 510.00MiB /dev/nvme0n1p1 00:10:35.670 00:10:35.671 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:35.671 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3846405 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:35.932 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:36.193 00:10:36.193 real 0m1.142s 00:10:36.193 user 0m0.026s 00:10:36.193 sys 0m0.121s 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:36.193 ************************************ 00:10:36.193 END TEST filesystem_in_capsule_btrfs 00:10:36.193 ************************************ 00:10:36.193 15:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.193 ************************************ 00:10:36.193 START TEST filesystem_in_capsule_xfs 00:10:36.193 ************************************ 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:36.193 
15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:36.193 15:07:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:36.193 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:36.193 = sectsz=512 attr=2, projid32bit=1 00:10:36.193 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:36.193 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:36.193 data = bsize=4096 blocks=130560, imaxpct=25 00:10:36.193 = sunit=0 swidth=0 blks 00:10:36.193 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:36.193 log =internal log bsize=4096 blocks=16384, version=2 00:10:36.193 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:36.193 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:37.134 Discarding blocks...Done. 
00:10:37.134 15:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:37.134 15:07:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3846405 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.044 00:10:39.044 real 0m2.906s 00:10:39.044 user 0m0.026s 00:10:39.044 sys 0m0.076s 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.044 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:39.044 ************************************ 00:10:39.044 END TEST filesystem_in_capsule_xfs 00:10:39.045 ************************************ 00:10:39.045 15:07:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:39.305 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:39.305 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.566 15:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3846405 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3846405 ']' 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3846405 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.566 15:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3846405 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3846405' 00:10:39.566 killing process with pid 3846405 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3846405 00:10:39.566 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3846405 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:39.827 00:10:39.827 real 0m18.059s 00:10:39.827 user 1m11.303s 00:10:39.827 sys 0m1.401s 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.827 ************************************ 00:10:39.827 END TEST nvmf_filesystem_in_capsule 00:10:39.827 ************************************ 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.827 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.827 rmmod nvme_tcp 00:10:39.827 rmmod nvme_fabrics 00:10:39.827 rmmod nvme_keyring 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-save 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@787 -- # iptables-restore 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.088 15:07:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.083 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:42.083 00:10:42.083 real 0m45.264s 00:10:42.083 user 2m21.179s 00:10:42.083 sys 0m8.526s 00:10:42.083 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.083 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:42.083 ************************************ 00:10:42.083 END TEST nvmf_filesystem 00:10:42.083 ************************************ 00:10:42.083 15:07:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:42.083 15:07:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:42.083 15:07:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.083 15:07:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:42.083 ************************************ 00:10:42.083 START TEST nvmf_target_discovery 00:10:42.083 ************************************ 00:10:42.083 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:42.355 * Looking for test storage... 
00:10:42.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.355 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:42.355 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:10:42.355 15:07:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:42.355 
15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.355 --rc genhtml_branch_coverage=1 00:10:42.355 --rc genhtml_function_coverage=1 00:10:42.355 --rc genhtml_legend=1 00:10:42.355 --rc geninfo_all_blocks=1 00:10:42.355 --rc geninfo_unexecuted_blocks=1 00:10:42.355 00:10:42.355 ' 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.355 --rc genhtml_branch_coverage=1 00:10:42.355 --rc genhtml_function_coverage=1 00:10:42.355 --rc genhtml_legend=1 00:10:42.355 --rc geninfo_all_blocks=1 00:10:42.355 --rc geninfo_unexecuted_blocks=1 00:10:42.355 00:10:42.355 ' 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.355 --rc genhtml_branch_coverage=1 00:10:42.355 --rc genhtml_function_coverage=1 00:10:42.355 --rc genhtml_legend=1 00:10:42.355 --rc geninfo_all_blocks=1 00:10:42.355 --rc geninfo_unexecuted_blocks=1 00:10:42.355 00:10:42.355 ' 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:42.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.355 --rc genhtml_branch_coverage=1 00:10:42.355 --rc genhtml_function_coverage=1 00:10:42.355 --rc genhtml_legend=1 00:10:42.355 --rc geninfo_all_blocks=1 00:10:42.355 --rc geninfo_unexecuted_blocks=1 00:10:42.355 00:10:42.355 ' 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.355 15:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.355 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:42.356 15:07:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.499 15:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.499 15:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:10:50.499 15:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:50.499 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:50.499 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:10:50.499 15:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:50.499 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.499 15:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:50.499 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:10:50.499 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:10:50.500 00:10:50.500 --- 10.0.0.2 ping statistics --- 00:10:50.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.500 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:10:50.500 00:10:50.500 --- 10.0.0.1 ping statistics --- 00:10:50.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.500 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # return 0 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:50.500 15:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # nvmfpid=3854317 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # waitforlisten 3854317 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3854317 ']' 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.500 15:07:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.500 [2024-10-01 15:07:59.572927] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:10:50.500 [2024-10-01 15:07:59.573002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.500 [2024-10-01 15:07:59.647268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.500 [2024-10-01 15:07:59.720867] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.500 [2024-10-01 15:07:59.720920] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.500 [2024-10-01 15:07:59.720929] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.500 [2024-10-01 15:07:59.720935] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.500 [2024-10-01 15:07:59.720941] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:50.500 [2024-10-01 15:07:59.721034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.500 [2024-10-01 15:07:59.721243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.500 [2024-10-01 15:07:59.721400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.500 [2024-10-01 15:07:59.721400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.761 [2024-10-01 15:08:00.428206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.761 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:50.762 15:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 Null1 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 [2024-10-01 15:08:00.488550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 Null2 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 
15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 Null3 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.762 Null4 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.762 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:51.023 15:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:10:51.023 00:10:51.023 Discovery Log Number of Records 6, Generation counter 6 00:10:51.023 =====Discovery Log Entry 0====== 00:10:51.023 trtype: tcp 00:10:51.023 adrfam: ipv4 00:10:51.023 subtype: current discovery subsystem 00:10:51.023 treq: not required 00:10:51.023 portid: 0 00:10:51.023 trsvcid: 4420 00:10:51.023 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:51.023 traddr: 10.0.0.2 00:10:51.023 eflags: explicit discovery connections, duplicate discovery information 00:10:51.023 sectype: none 00:10:51.023 =====Discovery Log Entry 1====== 00:10:51.023 trtype: tcp 00:10:51.023 adrfam: ipv4 00:10:51.023 subtype: nvme subsystem 00:10:51.023 treq: not required 00:10:51.023 portid: 0 00:10:51.023 trsvcid: 4420 00:10:51.023 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:51.023 traddr: 10.0.0.2 00:10:51.023 eflags: none 00:10:51.023 sectype: none 00:10:51.023 =====Discovery Log Entry 2====== 00:10:51.023 trtype: tcp 00:10:51.023 adrfam: ipv4 00:10:51.023 subtype: nvme subsystem 00:10:51.023 treq: not required 00:10:51.023 portid: 0 00:10:51.023 trsvcid: 4420 00:10:51.023 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:51.023 traddr: 10.0.0.2 00:10:51.023 eflags: none 00:10:51.023 sectype: none 00:10:51.023 =====Discovery Log Entry 3====== 00:10:51.023 trtype: tcp 00:10:51.023 adrfam: ipv4 00:10:51.023 subtype: nvme subsystem 00:10:51.023 treq: not required 00:10:51.023 portid: 
0 00:10:51.023 trsvcid: 4420 00:10:51.023 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:51.023 traddr: 10.0.0.2 00:10:51.023 eflags: none 00:10:51.023 sectype: none 00:10:51.023 =====Discovery Log Entry 4====== 00:10:51.023 trtype: tcp 00:10:51.023 adrfam: ipv4 00:10:51.023 subtype: nvme subsystem 00:10:51.023 treq: not required 00:10:51.023 portid: 0 00:10:51.023 trsvcid: 4420 00:10:51.023 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:51.023 traddr: 10.0.0.2 00:10:51.023 eflags: none 00:10:51.023 sectype: none 00:10:51.023 =====Discovery Log Entry 5====== 00:10:51.023 trtype: tcp 00:10:51.023 adrfam: ipv4 00:10:51.023 subtype: discovery subsystem referral 00:10:51.023 treq: not required 00:10:51.023 portid: 0 00:10:51.023 trsvcid: 4430 00:10:51.023 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:51.023 traddr: 10.0.0.2 00:10:51.023 eflags: none 00:10:51.023 sectype: none 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:51.023 Perform nvmf subsystem discovery via RPC 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:51.023 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.024 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.285 [ 00:10:51.285 { 00:10:51.285 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:51.285 "subtype": "Discovery", 00:10:51.285 "listen_addresses": [ 00:10:51.285 { 00:10:51.285 "trtype": "TCP", 00:10:51.285 "adrfam": "IPv4", 00:10:51.285 "traddr": "10.0.0.2", 00:10:51.285 "trsvcid": "4420" 00:10:51.285 } 00:10:51.285 ], 00:10:51.285 "allow_any_host": true, 00:10:51.285 "hosts": [] 00:10:51.285 }, 00:10:51.285 { 00:10:51.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.285 "subtype": "NVMe", 00:10:51.285 "listen_addresses": [ 
00:10:51.285 { 00:10:51.285 "trtype": "TCP", 00:10:51.285 "adrfam": "IPv4", 00:10:51.285 "traddr": "10.0.0.2", 00:10:51.285 "trsvcid": "4420" 00:10:51.285 } 00:10:51.285 ], 00:10:51.285 "allow_any_host": true, 00:10:51.285 "hosts": [], 00:10:51.285 "serial_number": "SPDK00000000000001", 00:10:51.285 "model_number": "SPDK bdev Controller", 00:10:51.285 "max_namespaces": 32, 00:10:51.285 "min_cntlid": 1, 00:10:51.285 "max_cntlid": 65519, 00:10:51.285 "namespaces": [ 00:10:51.285 { 00:10:51.285 "nsid": 1, 00:10:51.285 "bdev_name": "Null1", 00:10:51.285 "name": "Null1", 00:10:51.285 "nguid": "DE5A33A76A4D49EAABBA94FBB3C90EB2", 00:10:51.285 "uuid": "de5a33a7-6a4d-49ea-abba-94fbb3c90eb2" 00:10:51.285 } 00:10:51.285 ] 00:10:51.285 }, 00:10:51.285 { 00:10:51.285 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:51.285 "subtype": "NVMe", 00:10:51.285 "listen_addresses": [ 00:10:51.285 { 00:10:51.285 "trtype": "TCP", 00:10:51.285 "adrfam": "IPv4", 00:10:51.285 "traddr": "10.0.0.2", 00:10:51.285 "trsvcid": "4420" 00:10:51.285 } 00:10:51.285 ], 00:10:51.285 "allow_any_host": true, 00:10:51.285 "hosts": [], 00:10:51.285 "serial_number": "SPDK00000000000002", 00:10:51.285 "model_number": "SPDK bdev Controller", 00:10:51.285 "max_namespaces": 32, 00:10:51.285 "min_cntlid": 1, 00:10:51.285 "max_cntlid": 65519, 00:10:51.285 "namespaces": [ 00:10:51.285 { 00:10:51.285 "nsid": 1, 00:10:51.285 "bdev_name": "Null2", 00:10:51.285 "name": "Null2", 00:10:51.285 "nguid": "A56726206F40458DA2E14C8F4E6A27DD", 00:10:51.285 "uuid": "a5672620-6f40-458d-a2e1-4c8f4e6a27dd" 00:10:51.285 } 00:10:51.285 ] 00:10:51.285 }, 00:10:51.285 { 00:10:51.285 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:51.285 "subtype": "NVMe", 00:10:51.285 "listen_addresses": [ 00:10:51.285 { 00:10:51.285 "trtype": "TCP", 00:10:51.285 "adrfam": "IPv4", 00:10:51.285 "traddr": "10.0.0.2", 00:10:51.285 "trsvcid": "4420" 00:10:51.285 } 00:10:51.285 ], 00:10:51.285 "allow_any_host": true, 00:10:51.285 "hosts": [], 00:10:51.285 
"serial_number": "SPDK00000000000003", 00:10:51.285 "model_number": "SPDK bdev Controller", 00:10:51.285 "max_namespaces": 32, 00:10:51.285 "min_cntlid": 1, 00:10:51.285 "max_cntlid": 65519, 00:10:51.285 "namespaces": [ 00:10:51.285 { 00:10:51.285 "nsid": 1, 00:10:51.285 "bdev_name": "Null3", 00:10:51.285 "name": "Null3", 00:10:51.285 "nguid": "9B11CD0D652942CE9385453D9F6478C8", 00:10:51.285 "uuid": "9b11cd0d-6529-42ce-9385-453d9f6478c8" 00:10:51.285 } 00:10:51.285 ] 00:10:51.285 }, 00:10:51.285 { 00:10:51.285 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:51.285 "subtype": "NVMe", 00:10:51.285 "listen_addresses": [ 00:10:51.285 { 00:10:51.285 "trtype": "TCP", 00:10:51.285 "adrfam": "IPv4", 00:10:51.285 "traddr": "10.0.0.2", 00:10:51.285 "trsvcid": "4420" 00:10:51.285 } 00:10:51.285 ], 00:10:51.285 "allow_any_host": true, 00:10:51.285 "hosts": [], 00:10:51.285 "serial_number": "SPDK00000000000004", 00:10:51.285 "model_number": "SPDK bdev Controller", 00:10:51.285 "max_namespaces": 32, 00:10:51.285 "min_cntlid": 1, 00:10:51.285 "max_cntlid": 65519, 00:10:51.285 "namespaces": [ 00:10:51.285 { 00:10:51.285 "nsid": 1, 00:10:51.285 "bdev_name": "Null4", 00:10:51.285 "name": "Null4", 00:10:51.285 "nguid": "1DC22BC01C24420B920A259E512EE817", 00:10:51.285 "uuid": "1dc22bc0-1c24-420b-920a-259e512ee817" 00:10:51.285 } 00:10:51.285 ] 00:10:51.285 } 00:10:51.285 ] 00:10:51.285 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.285 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:51.285 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.285 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.285 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:51.285 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:51.286 
15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.286 rmmod nvme_tcp 00:10:51.286 rmmod nvme_fabrics 00:10:51.286 rmmod nvme_keyring 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@513 -- # '[' -n 3854317 ']' 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # killprocess 3854317 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3854317 ']' 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3854317 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.286 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3854317 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3854317' 00:10:51.546 killing process with pid 3854317 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3854317 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3854317 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:51.546 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-save 00:10:51.547 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:51.547 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:10:51.547 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.547 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.547 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.547 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.547 15:08:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.088 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.088 00:10:54.088 real 0m11.538s 00:10:54.088 user 0m8.738s 00:10:54.088 sys 0m6.007s 00:10:54.088 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.088 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:54.088 ************************************ 00:10:54.088 END TEST nvmf_target_discovery 00:10:54.088 ************************************ 00:10:54.088 15:08:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:54.088 15:08:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:54.088 15:08:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.088 15:08:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.089 ************************************ 00:10:54.089 START TEST nvmf_referrals 00:10:54.089 ************************************ 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:54.089 * Looking for test storage... 
00:10:54.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:54.089 15:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:54.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.089 
--rc genhtml_branch_coverage=1 00:10:54.089 --rc genhtml_function_coverage=1 00:10:54.089 --rc genhtml_legend=1 00:10:54.089 --rc geninfo_all_blocks=1 00:10:54.089 --rc geninfo_unexecuted_blocks=1 00:10:54.089 00:10:54.089 ' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:54.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.089 --rc genhtml_branch_coverage=1 00:10:54.089 --rc genhtml_function_coverage=1 00:10:54.089 --rc genhtml_legend=1 00:10:54.089 --rc geninfo_all_blocks=1 00:10:54.089 --rc geninfo_unexecuted_blocks=1 00:10:54.089 00:10:54.089 ' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:54.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.089 --rc genhtml_branch_coverage=1 00:10:54.089 --rc genhtml_function_coverage=1 00:10:54.089 --rc genhtml_legend=1 00:10:54.089 --rc geninfo_all_blocks=1 00:10:54.089 --rc geninfo_unexecuted_blocks=1 00:10:54.089 00:10:54.089 ' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:54.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.089 --rc genhtml_branch_coverage=1 00:10:54.089 --rc genhtml_function_coverage=1 00:10:54.089 --rc genhtml_legend=1 00:10:54.089 --rc geninfo_all_blocks=1 00:10:54.089 --rc geninfo_unexecuted_blocks=1 00:10:54.089 00:10:54.089 ' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.089 
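The trace above walks through scripts/common.sh's `lt 1.15 2` / `cmp_versions` path: each version string is split on `IFS=.-:` into an array and compared component by component. A simplified, standalone sketch of that comparison (helper body is ours, modeled on the traced logic; missing components default to 0):

```shell
# Sketch of the component-wise version comparison traced above
# (modeled on the lt/cmp_versions helpers in scripts/common.sh).
lt() { # lt A B -> exit 0 iff version A < version B
  local -a ver1 ver2
  local v max
  IFS=.-: read -ra ver1 <<< "$1"   # split "1.15" -> (1 15)
  IFS=.-: read -ra ver2 <<< "$2"
  max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    # missing components compare as 0, so 1.15 vs 2 works
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
  done
  return 1 # equal, hence not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the log's `lt 1.15 2` (the installed lcov version against the 2.x cutoff) selects the `--rc lcov_branch_coverage=1` option set.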
15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.089 15:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:54.089 15:08:03 
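The `build_nvmf_app_args` trace above appends options to the `NVMF_APP` bash array, and also captures a real bug: `'[' '' -eq 1 ']'` fails with "integer expression expected" because an empty variable reached a numeric test. A minimal sketch of both the array-based argv pattern and the `${var:-0}` guard that avoids the error (the binary name, variable, and flag below are illustrative, not the actual common.sh names):

```shell
# Building an app command line as a bash array, as nvmf/common.sh does.
NVMF_APP_SHM_ID=0
NVMF_APP=(nvmf_tgt)                          # stand-in binary name
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # per-instance options

DEMO_TOGGLE=                                 # empty, like the failing run
# [ "$DEMO_TOGGLE" -eq 1 ]                   # <- "integer expression expected"
if [ "${DEMO_TOGGLE:-0}" -eq 1 ]; then       # default empty/unset to 0
  NVMF_APP+=(--demo-flag)                    # flag name is illustrative
fi

echo "${NVMF_APP[@]}"
```

Keeping argv in an array (rather than a flat string) preserves arguments containing spaces when the app is finally launched as `"${NVMF_APP[@]}"`.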
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.089 15:08:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.730 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:00.731 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:00.731 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # [[ ice == unknown 
]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:00.731 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.731 15:08:10 
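The device scan traced above seeds the `e810`, `x722`, and `mlx` arrays from vendor:device IDs in `pci_bus_cache` and then matches each found device (here two `0x8086:0x159b` ports) against those buckets. A condensed sketch of that classification, using a subset of the device IDs visible in the trace:

```shell
# Simplified sketch of the NIC bucketing traced above: PCI device IDs
# are classified into e810 / x722 / mlx families (subset of IDs shown).
declare -a e810=() x722=() mlx=()

classify_nic() { # classify_nic <pci-addr> <device-id>
  case $2 in
    0x1592 | 0x159b) e810+=("$1") ;;                  # Intel E810
    0x37d2) x722+=("$1") ;;                           # Intel X722
    0x1017 | 0x1019 | 0x1015 | 0x1013) mlx+=("$1") ;; # Mellanox CX-4/5
  esac
}

# The two ports the log reports as "Found 0000:4b:00.x (0x8086 - 0x159b)"
classify_nic 0000:4b:00.0 0x159b
classify_nic 0000:4b:00.1 0x159b

echo "e810: ${#e810[@]} x722: ${#x722[@]} mlx: ${#mlx[@]}"
```

With two E810 ports classified, the later `(( 2 == 0 ))` checks fall through and the run proceeds with `is_hw=yes`.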
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:00.731 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # is_hw=yes 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:00.731 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 
-- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:00.732 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.733 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:00.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:11:00.994 00:11:00.994 --- 10.0.0.2 ping statistics --- 00:11:00.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.994 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:11:00.994 00:11:00.994 --- 10.0.0.1 ping statistics --- 00:11:00.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.994 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # return 0 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:00.994 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # nvmfpid=3858832 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # waitforlisten 3858832 00:11:01.255 
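The `nvmf_tcp_init` sequence above isolates the target port in a private network namespace (`ip netns add`, `ip link set ... netns`), assigns 10.0.0.1/10.0.0.2, opens TCP port 4420 with a tagged iptables rule, and verifies both directions with `ping`. A hedged sketch of the same topology using a veth pair instead of the physical cvl_0_0/cvl_0_1 ports (all names are ours; requires root and CAP_NET_ADMIN, and skips itself otherwise):

```shell
# Sketch of the target/initiator namespace split traced above, with a
# veth pair standing in for the physical NIC ports. Names are ours.
NS=demo_ns_spdk

setup_ns() {
  ip netns add "$NS" &&
    ip link add demo_init type veth peer name demo_tgt &&
    ip link set demo_tgt netns "$NS" &&               # target side into the ns
    ip addr add 10.0.0.1/24 dev demo_init &&          # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev demo_tgt &&
    ip link set demo_init up &&
    ip netns exec "$NS" ip link set demo_tgt up &&
    ip netns exec "$NS" ip link set lo up &&
    ping -c 1 -W 2 10.0.0.2                           # host -> namespaced side
}

if [ "$(id -u)" -eq 0 ] && command -v ip >/dev/null && setup_ns >/dev/null 2>&1; then
  demo_status=ok
else
  demo_status=skipped                                 # no root / no netns support
fi
ip netns del "$NS" 2>/dev/null || true                # deleting the ns kills demo_tgt,
ip link del demo_init 2>/dev/null || true             # which also removes its veth peer
echo "namespace demo: $demo_status"
```

The namespace split is what lets one machine act as both target (inside the ns, where `nvmf_tgt` is launched via `ip netns exec`) and initiator (on the host side).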
15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3858832 ']' 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.255 15:08:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.255 [2024-10-01 15:08:10.954723] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:11:01.255 [2024-10-01 15:08:10.954792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.255 [2024-10-01 15:08:11.030831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.255 [2024-10-01 15:08:11.107138] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.255 [2024-10-01 15:08:11.107179] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:01.255 [2024-10-01 15:08:11.107187] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.255 [2024-10-01 15:08:11.107194] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.255 [2024-10-01 15:08:11.107200] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.255 [2024-10-01 15:08:11.107338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.255 [2024-10-01 15:08:11.107478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.255 [2024-10-01 15:08:11.107635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.255 [2024-10-01 15:08:11.107636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.196 [2024-10-01 15:08:11.812250] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.196 [2024-10-01 15:08:11.828466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:02.196 15:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:02.196 15:08:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.457 15:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.457 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:02.458 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:02.458 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:02.458 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:02.458 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:02.458 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:02.458 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:02.719 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:02.981 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:02.981 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:02.981 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:02.981 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:02.981 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:02.981 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:02.981 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:03.242 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:03.242 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:03.242 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:03.242 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:03.242 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.242 15:08:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:03.503 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:03.504 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.504 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:03.764 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:03.764 15:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:03.764 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:03.764 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:03.764 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.764 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.025 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.286 15:08:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.286 rmmod nvme_tcp 00:11:04.286 rmmod nvme_fabrics 00:11:04.286 rmmod nvme_keyring 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@513 -- # '[' -n 3858832 ']' 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # killprocess 3858832 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3858832 ']' 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3858832 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3858832 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3858832' 00:11:04.286 killing process with pid 3858832 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 3858832 00:11:04.286 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3858832 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-save 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@787 -- # iptables-restore 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.547 15:08:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.094 00:11:07.094 real 0m12.837s 00:11:07.094 user 0m15.392s 00:11:07.094 sys 0m6.290s 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.094 
************************************ 00:11:07.094 END TEST nvmf_referrals 00:11:07.094 ************************************ 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:07.094 ************************************ 00:11:07.094 START TEST nvmf_connect_disconnect 00:11:07.094 ************************************ 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:07.094 * Looking for test storage... 
00:11:07.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:07.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.094 --rc genhtml_branch_coverage=1 00:11:07.094 --rc genhtml_function_coverage=1 00:11:07.094 --rc genhtml_legend=1 00:11:07.094 --rc geninfo_all_blocks=1 00:11:07.094 --rc geninfo_unexecuted_blocks=1 00:11:07.094 00:11:07.094 ' 00:11:07.094 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:07.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.094 --rc genhtml_branch_coverage=1 00:11:07.094 --rc genhtml_function_coverage=1 00:11:07.094 --rc genhtml_legend=1 00:11:07.094 --rc geninfo_all_blocks=1 00:11:07.094 --rc geninfo_unexecuted_blocks=1 00:11:07.095 00:11:07.095 ' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:07.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.095 --rc genhtml_branch_coverage=1 00:11:07.095 --rc genhtml_function_coverage=1 00:11:07.095 --rc genhtml_legend=1 00:11:07.095 --rc geninfo_all_blocks=1 00:11:07.095 --rc geninfo_unexecuted_blocks=1 00:11:07.095 00:11:07.095 ' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:07.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.095 --rc genhtml_branch_coverage=1 00:11:07.095 --rc genhtml_function_coverage=1 00:11:07.095 --rc genhtml_legend=1 00:11:07.095 --rc geninfo_all_blocks=1 00:11:07.095 --rc geninfo_unexecuted_blocks=1 00:11:07.095 00:11:07.095 ' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.095 15:08:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.683 15:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.683 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:13.684 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:13.684 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:13.684 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.684 15:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:13.684 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # is_hw=yes 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.684 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.945 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.945 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.945 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.945 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.945 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.945 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.945 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.206 15:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:11:14.206 00:11:14.206 --- 10.0.0.2 ping statistics --- 00:11:14.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.206 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:11:14.206 00:11:14.206 --- 10.0.0.1 ping statistics --- 00:11:14.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.206 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # return 0 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # 
nvmfpid=3863784 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # waitforlisten 3863784 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3863784 ']' 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.206 15:08:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.206 [2024-10-01 15:08:23.976499] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:11:14.206 [2024-10-01 15:08:23.976550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.206 [2024-10-01 15:08:24.044873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.467 [2024-10-01 15:08:24.110060] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:14.467 [2024-10-01 15:08:24.110100] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.467 [2024-10-01 15:08:24.110108] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.467 [2024-10-01 15:08:24.110115] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.467 [2024-10-01 15:08:24.110121] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.467 [2024-10-01 15:08:24.110269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.467 [2024-10-01 15:08:24.110445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.467 [2024-10-01 15:08:24.110605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.467 [2024-10-01 15:08:24.110605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:14.467 15:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 [2024-10-01 15:08:24.250568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.467 15:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 [2024-10-01 15:08:24.309886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:14.467 15:08:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:18.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:32.761 15:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.761 rmmod nvme_tcp 00:11:32.761 rmmod nvme_fabrics 00:11:32.761 rmmod nvme_keyring 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@513 -- # '[' -n 3863784 ']' 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # killprocess 3863784 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3863784 ']' 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3863784 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3863784 
00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3863784' 00:11:32.761 killing process with pid 3863784 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3863784 00:11:32.761 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3863784 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-save 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@787 -- # iptables-restore 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.022 15:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.022 15:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.569 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.569 00:11:35.569 real 0m28.411s 00:11:35.569 user 1m16.141s 00:11:35.569 sys 0m6.914s 00:11:35.569 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.569 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.569 ************************************ 00:11:35.569 END TEST nvmf_connect_disconnect 00:11:35.569 ************************************ 00:11:35.569 15:08:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:35.569 15:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.569 15:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.569 15:08:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.569 ************************************ 00:11:35.569 START TEST nvmf_multitarget 00:11:35.569 ************************************ 00:11:35.569 15:08:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:35.569 * Looking for test storage... 
00:11:35.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:35.569 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.569 --rc genhtml_branch_coverage=1 00:11:35.569 --rc genhtml_function_coverage=1 00:11:35.569 --rc genhtml_legend=1 00:11:35.569 --rc geninfo_all_blocks=1 00:11:35.569 --rc geninfo_unexecuted_blocks=1 00:11:35.569 00:11:35.569 ' 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:35.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.569 --rc genhtml_branch_coverage=1 00:11:35.569 --rc genhtml_function_coverage=1 00:11:35.569 --rc genhtml_legend=1 00:11:35.569 --rc geninfo_all_blocks=1 00:11:35.569 --rc geninfo_unexecuted_blocks=1 00:11:35.569 00:11:35.569 ' 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:35.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.569 --rc genhtml_branch_coverage=1 00:11:35.569 --rc genhtml_function_coverage=1 00:11:35.569 --rc genhtml_legend=1 00:11:35.569 --rc geninfo_all_blocks=1 00:11:35.569 --rc geninfo_unexecuted_blocks=1 00:11:35.569 00:11:35.569 ' 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:35.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.569 --rc genhtml_branch_coverage=1 00:11:35.569 --rc genhtml_function_coverage=1 00:11:35.569 --rc genhtml_legend=1 00:11:35.569 --rc geninfo_all_blocks=1 00:11:35.569 --rc geninfo_unexecuted_blocks=1 00:11:35.569 00:11:35.569 ' 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.569 15:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.569 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.570 15:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.570 15:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:43.710 15:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:43.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:43.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.710 15:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:43.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:43.710 15:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:43.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # is_hw=yes 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.710 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:11:43.711 00:11:43.711 --- 10.0.0.2 ping statistics --- 00:11:43.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.711 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:11:43.711 00:11:43.711 --- 10.0.0.1 ping statistics --- 00:11:43.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.711 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # return 0 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # nvmfpid=3871578 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # waitforlisten 3871578 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3871578 ']' 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.711 15:08:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.711 [2024-10-01 15:08:52.536161] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:11:43.711 [2024-10-01 15:08:52.536229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.711 [2024-10-01 15:08:52.606980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.711 [2024-10-01 15:08:52.682926] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.711 [2024-10-01 15:08:52.682965] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.711 [2024-10-01 15:08:52.682973] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.711 [2024-10-01 15:08:52.682982] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.711 [2024-10-01 15:08:52.682988] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:43.711 [2024-10-01 15:08:52.683082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.711 [2024-10-01 15:08:52.683333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.711 [2024-10-01 15:08:52.683489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.711 [2024-10-01 15:08:52.683489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:43.711 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:11:43.971 "nvmf_tgt_1" 00:11:43.971 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:43.971 "nvmf_tgt_2" 00:11:43.971 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.971 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:43.971 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:43.972 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:44.232 true 00:11:44.232 15:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:44.232 true 00:11:44.232 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:44.232 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:44.492 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:44.492 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:44.492 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:44.492 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:44.492 15:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:44.492 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:44.492 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:44.492 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:44.493 rmmod nvme_tcp 00:11:44.493 rmmod nvme_fabrics 00:11:44.493 rmmod nvme_keyring 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@513 -- # '[' -n 3871578 ']' 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # killprocess 3871578 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3871578 ']' 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3871578 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3871578 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3871578' 00:11:44.493 killing process with pid 3871578 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3871578 00:11:44.493 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3871578 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-save 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@787 -- # iptables-restore 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.753 15:08:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.665 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.665 
00:11:46.665 real 0m11.559s 00:11:46.665 user 0m9.634s 00:11:46.665 sys 0m6.044s 00:11:46.665 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.665 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:46.665 ************************************ 00:11:46.665 END TEST nvmf_multitarget 00:11:46.665 ************************************ 00:11:46.665 15:08:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:46.665 15:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.665 15:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.665 15:08:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.927 ************************************ 00:11:46.927 START TEST nvmf_rpc 00:11:46.927 ************************************ 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:46.927 * Looking for test storage... 
00:11:46.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.927 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.928 15:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:46.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.928 --rc genhtml_branch_coverage=1 00:11:46.928 --rc genhtml_function_coverage=1 00:11:46.928 --rc genhtml_legend=1 00:11:46.928 --rc geninfo_all_blocks=1 00:11:46.928 --rc geninfo_unexecuted_blocks=1 
00:11:46.928 00:11:46.928 ' 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:46.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.928 --rc genhtml_branch_coverage=1 00:11:46.928 --rc genhtml_function_coverage=1 00:11:46.928 --rc genhtml_legend=1 00:11:46.928 --rc geninfo_all_blocks=1 00:11:46.928 --rc geninfo_unexecuted_blocks=1 00:11:46.928 00:11:46.928 ' 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:46.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.928 --rc genhtml_branch_coverage=1 00:11:46.928 --rc genhtml_function_coverage=1 00:11:46.928 --rc genhtml_legend=1 00:11:46.928 --rc geninfo_all_blocks=1 00:11:46.928 --rc geninfo_unexecuted_blocks=1 00:11:46.928 00:11:46.928 ' 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:46.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.928 --rc genhtml_branch_coverage=1 00:11:46.928 --rc genhtml_function_coverage=1 00:11:46.928 --rc genhtml_legend=1 00:11:46.928 --rc geninfo_all_blocks=1 00:11:46.928 --rc geninfo_unexecuted_blocks=1 00:11:46.928 00:11:46.928 ' 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.928 15:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.928 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:11:47.190 15:08:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.190 15:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.413 
15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:55.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:55.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:55.413 15:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:55.413 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ up == up ]] 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.413 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:55.413 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:11:55.414 15:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # is_hw=yes 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
00:11:55.414 15:09:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:55.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:11:55.414 00:11:55.414 --- 10.0.0.2 ping statistics --- 00:11:55.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.414 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:11:55.414 00:11:55.414 --- 10.0.0.1 ping statistics --- 00:11:55.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.414 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # return 0 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # nvmfpid=3876386 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # waitforlisten 3876386 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@504 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3876386 ']' 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.414 15:09:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.414 [2024-10-01 15:09:04.287601] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:11:55.414 [2024-10-01 15:09:04.287675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.414 [2024-10-01 15:09:04.359598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.414 [2024-10-01 15:09:04.434786] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.414 [2024-10-01 15:09:04.434824] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:55.414 [2024-10-01 15:09:04.434833] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.414 [2024-10-01 15:09:04.434840] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.414 [2024-10-01 15:09:04.434845] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.414 [2024-10-01 15:09:04.434987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.414 [2024-10-01 15:09:04.435119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.414 [2024-10-01 15:09:04.435457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.414 [2024-10-01 15:09:04.435458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.414 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.414 15:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:55.414 "tick_rate": 2400000000, 00:11:55.414 "poll_groups": [ 00:11:55.414 { 00:11:55.414 "name": "nvmf_tgt_poll_group_000", 00:11:55.414 "admin_qpairs": 0, 00:11:55.414 "io_qpairs": 0, 00:11:55.414 "current_admin_qpairs": 0, 00:11:55.414 "current_io_qpairs": 0, 00:11:55.414 "pending_bdev_io": 0, 00:11:55.414 "completed_nvme_io": 0, 00:11:55.414 "transports": [] 00:11:55.414 }, 00:11:55.414 { 00:11:55.414 "name": "nvmf_tgt_poll_group_001", 00:11:55.414 "admin_qpairs": 0, 00:11:55.414 "io_qpairs": 0, 00:11:55.414 "current_admin_qpairs": 0, 00:11:55.414 "current_io_qpairs": 0, 00:11:55.414 "pending_bdev_io": 0, 00:11:55.414 "completed_nvme_io": 0, 00:11:55.414 "transports": [] 00:11:55.414 }, 00:11:55.414 { 00:11:55.414 "name": "nvmf_tgt_poll_group_002", 00:11:55.414 "admin_qpairs": 0, 00:11:55.414 "io_qpairs": 0, 00:11:55.414 "current_admin_qpairs": 0, 00:11:55.415 "current_io_qpairs": 0, 00:11:55.415 "pending_bdev_io": 0, 00:11:55.415 "completed_nvme_io": 0, 00:11:55.415 "transports": [] 00:11:55.415 }, 00:11:55.415 { 00:11:55.415 "name": "nvmf_tgt_poll_group_003", 00:11:55.415 "admin_qpairs": 0, 00:11:55.415 "io_qpairs": 0, 00:11:55.415 "current_admin_qpairs": 0, 00:11:55.415 "current_io_qpairs": 0, 00:11:55.415 "pending_bdev_io": 0, 00:11:55.415 "completed_nvme_io": 0, 00:11:55.415 "transports": [] 00:11:55.415 } 00:11:55.415 ] 00:11:55.415 }' 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:55.415 15:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.415 [2024-10-01 15:09:05.258369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.415 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.676 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.676 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:55.676 "tick_rate": 2400000000, 00:11:55.676 "poll_groups": [ 00:11:55.676 { 00:11:55.676 "name": "nvmf_tgt_poll_group_000", 00:11:55.676 "admin_qpairs": 0, 00:11:55.676 "io_qpairs": 0, 00:11:55.676 "current_admin_qpairs": 0, 00:11:55.676 "current_io_qpairs": 0, 00:11:55.676 "pending_bdev_io": 0, 00:11:55.676 "completed_nvme_io": 0, 00:11:55.676 "transports": [ 00:11:55.676 { 00:11:55.676 "trtype": "TCP" 00:11:55.677 } 00:11:55.677 ] 00:11:55.677 }, 00:11:55.677 { 00:11:55.677 "name": "nvmf_tgt_poll_group_001", 00:11:55.677 "admin_qpairs": 0, 00:11:55.677 "io_qpairs": 0, 00:11:55.677 "current_admin_qpairs": 0, 00:11:55.677 "current_io_qpairs": 0, 00:11:55.677 "pending_bdev_io": 0, 00:11:55.677 
"completed_nvme_io": 0, 00:11:55.677 "transports": [ 00:11:55.677 { 00:11:55.677 "trtype": "TCP" 00:11:55.677 } 00:11:55.677 ] 00:11:55.677 }, 00:11:55.677 { 00:11:55.677 "name": "nvmf_tgt_poll_group_002", 00:11:55.677 "admin_qpairs": 0, 00:11:55.677 "io_qpairs": 0, 00:11:55.677 "current_admin_qpairs": 0, 00:11:55.677 "current_io_qpairs": 0, 00:11:55.677 "pending_bdev_io": 0, 00:11:55.677 "completed_nvme_io": 0, 00:11:55.677 "transports": [ 00:11:55.677 { 00:11:55.677 "trtype": "TCP" 00:11:55.677 } 00:11:55.677 ] 00:11:55.677 }, 00:11:55.677 { 00:11:55.677 "name": "nvmf_tgt_poll_group_003", 00:11:55.677 "admin_qpairs": 0, 00:11:55.677 "io_qpairs": 0, 00:11:55.677 "current_admin_qpairs": 0, 00:11:55.677 "current_io_qpairs": 0, 00:11:55.677 "pending_bdev_io": 0, 00:11:55.677 "completed_nvme_io": 0, 00:11:55.677 "transports": [ 00:11:55.677 { 00:11:55.677 "trtype": "TCP" 00:11:55.677 } 00:11:55.677 ] 00:11:55.677 } 00:11:55.677 ] 00:11:55.677 }' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:55.677 
15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.677 Malloc1 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:55.677 15:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.677 [2024-10-01 15:09:05.450173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:11:55.677 [2024-10-01 15:09:05.486966] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:11:55.677 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:55.677 could not add new controller: failed to write to nvme-fabrics device 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.677 15:09:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.591 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:57.592 15:09:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 
00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:59.525 15:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:59.525 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.525 [2024-10-01 15:09:09.244187] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:11:59.525 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:59.525 could not add new controller: failed to write to nvme-fabrics device 00:11:59.526 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:59.526 
15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:59.526 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:59.526 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:59.526 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:59.526 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.526 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.526 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.526 15:09:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.436 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.436 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:01.436 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.436 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:01.436 15:09:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:03.345 15:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.345 15:09:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.345 [2024-10-01 15:09:12.994929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.345 15:09:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.756 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.756 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:04.756 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.756 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:04.756 15:09:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.299 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:07.300 
15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.300 [2024-10-01 15:09:16.761246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.300 15:09:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.682 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.683 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.683 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.683 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:08.683 15:09:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:10.593 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:10.593 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:10.593 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.593 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:10.593 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.593 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:10.593 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.853 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.853 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:10.853 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:10.853 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.854 15:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.854 [2024-10-01 15:09:20.530827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.854 15:09:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.240 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.240 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.240 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.240 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:12.240 15:09:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.786 [2024-10-01 15:09:24.285301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.786 15:09:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.170 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.170 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:16.171 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:16.171 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:16.171 15:09:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:18.083 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:18.083 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:18.083 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.083 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:18.083 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.083 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:18.083 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.344 15:09:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.344 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.344 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.344 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.344 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.344 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.344 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:18.344 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.345 [2024-10-01 15:09:28.039468] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.345 15:09:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.745 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.745 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.745 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.745 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:19.745 15:09:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 [2024-10-01 15:09:31.765885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 [2024-10-01 15:09:31.826060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 
15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:22.291 [2024-10-01 15:09:31.894239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 [2024-10-01 15:09:31.962477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 [2024-10-01 15:09:32.030701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.291 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:22.291 "tick_rate": 2400000000, 00:12:22.291 "poll_groups": [ 00:12:22.291 { 00:12:22.291 "name": "nvmf_tgt_poll_group_000", 00:12:22.291 "admin_qpairs": 0, 00:12:22.291 "io_qpairs": 224, 00:12:22.291 "current_admin_qpairs": 0, 00:12:22.291 "current_io_qpairs": 0, 00:12:22.291 "pending_bdev_io": 0, 00:12:22.291 "completed_nvme_io": 276, 00:12:22.291 "transports": [ 00:12:22.291 { 00:12:22.291 "trtype": "TCP" 00:12:22.291 } 00:12:22.291 ] 00:12:22.291 }, 00:12:22.291 { 00:12:22.291 "name": "nvmf_tgt_poll_group_001", 00:12:22.291 "admin_qpairs": 1, 00:12:22.291 "io_qpairs": 223, 00:12:22.291 "current_admin_qpairs": 0, 00:12:22.291 "current_io_qpairs": 0, 00:12:22.291 "pending_bdev_io": 0, 00:12:22.291 "completed_nvme_io": 367, 00:12:22.291 "transports": [ 00:12:22.291 { 00:12:22.291 "trtype": "TCP" 00:12:22.291 } 00:12:22.291 ] 00:12:22.291 }, 00:12:22.291 { 00:12:22.291 "name": "nvmf_tgt_poll_group_002", 00:12:22.291 "admin_qpairs": 6, 00:12:22.291 "io_qpairs": 218, 00:12:22.291 "current_admin_qpairs": 0, 00:12:22.291 "current_io_qpairs": 0, 00:12:22.291 "pending_bdev_io": 0, 
00:12:22.291 "completed_nvme_io": 220, 00:12:22.291 "transports": [ 00:12:22.291 { 00:12:22.291 "trtype": "TCP" 00:12:22.291 } 00:12:22.291 ] 00:12:22.291 }, 00:12:22.291 { 00:12:22.291 "name": "nvmf_tgt_poll_group_003", 00:12:22.291 "admin_qpairs": 0, 00:12:22.292 "io_qpairs": 224, 00:12:22.292 "current_admin_qpairs": 0, 00:12:22.292 "current_io_qpairs": 0, 00:12:22.292 "pending_bdev_io": 0, 00:12:22.292 "completed_nvme_io": 376, 00:12:22.292 "transports": [ 00:12:22.292 { 00:12:22.292 "trtype": "TCP" 00:12:22.292 } 00:12:22.292 ] 00:12:22.292 } 00:12:22.292 ] 00:12:22.292 }' 00:12:22.292 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:22.292 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:22.292 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:22.292 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.292 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:22.292 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:22.292 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.552 rmmod nvme_tcp 00:12:22.552 rmmod nvme_fabrics 00:12:22.552 rmmod nvme_keyring 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@513 -- # '[' -n 3876386 ']' 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # killprocess 3876386 00:12:22.552 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3876386 ']' 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3876386 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3876386 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3876386' 00:12:22.553 killing process with pid 3876386 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3876386 00:12:22.553 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3876386 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-save 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@787 -- # iptables-restore 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.814 15:09:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.727 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.727 00:12:24.727 real 0m38.007s 00:12:24.727 user 1m53.949s 00:12:24.727 sys 0m7.869s 00:12:24.727 15:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.727 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.727 ************************************ 00:12:24.727 END TEST nvmf_rpc 00:12:24.727 ************************************ 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.989 ************************************ 00:12:24.989 START TEST nvmf_invalid 00:12:24.989 ************************************ 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:24.989 * Looking for test storage... 
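The nvmf_rpc stats check that just finished above aggregates `nvmf_get_stats` output with a `jsum` helper: jq extracts one numeric field per poll group and awk sums the column (hence the `(( 7 > 0 ))` and `(( 889 > 0 ))` assertions in the log). A minimal standalone sketch of that pattern, using a trimmed stand-in for the captured stats JSON rather than live RPC output:

```shell
# Sketch of the jsum helper seen in target/rpc.sh (jq | awk summation).
# The JSON below is a hand-copied excerpt of the nvmf_get_stats output
# from the run above, not a live query.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","admin_qpairs":0,"io_qpairs":224},
  {"name":"nvmf_tgt_poll_group_001","admin_qpairs":1,"io_qpairs":223},
  {"name":"nvmf_tgt_poll_group_002","admin_qpairs":6,"io_qpairs":218},
  {"name":"nvmf_tgt_poll_group_003","admin_qpairs":0,"io_qpairs":224}]}'

jsum() {
    # Sum one numeric field across all poll groups.
    local filter=$1
    echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
}

admin_total=$(jsum '.poll_groups[].admin_qpairs')   # 0+1+6+0 = 7
io_total=$(jsum '.poll_groups[].io_qpairs')         # 224+223+218+224 = 889
echo "admin_qpairs=$admin_total io_qpairs=$io_total"
```

The test then only asserts the totals are positive, since qpair distribution across poll groups varies run to run.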
00:12:24.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:24.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.989 --rc genhtml_branch_coverage=1 00:12:24.989 --rc 
genhtml_function_coverage=1 00:12:24.989 --rc genhtml_legend=1 00:12:24.989 --rc geninfo_all_blocks=1 00:12:24.989 --rc geninfo_unexecuted_blocks=1 00:12:24.989 00:12:24.989 ' 00:12:24.989 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:24.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.990 --rc genhtml_branch_coverage=1 00:12:24.990 --rc genhtml_function_coverage=1 00:12:24.990 --rc genhtml_legend=1 00:12:24.990 --rc geninfo_all_blocks=1 00:12:24.990 --rc geninfo_unexecuted_blocks=1 00:12:24.990 00:12:24.990 ' 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:24.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.990 --rc genhtml_branch_coverage=1 00:12:24.990 --rc genhtml_function_coverage=1 00:12:24.990 --rc genhtml_legend=1 00:12:24.990 --rc geninfo_all_blocks=1 00:12:24.990 --rc geninfo_unexecuted_blocks=1 00:12:24.990 00:12:24.990 ' 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:24.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.990 --rc genhtml_branch_coverage=1 00:12:24.990 --rc genhtml_function_coverage=1 00:12:24.990 --rc genhtml_legend=1 00:12:24.990 --rc geninfo_all_blocks=1 00:12:24.990 --rc geninfo_unexecuted_blocks=1 00:12:24.990 00:12:24.990 ' 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.990 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.251 15:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:25.251 15:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:12:25.251 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:25.252 15:09:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.395 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.396 15:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:12:33.396 15:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:33.396 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:33.396 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@374 -- # [[ 0x159b 
== \0\x\1\0\1\7 ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:33.396 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.396 15:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ up == up ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:33.396 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # is_hw=yes 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.396 15:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.396 15:09:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:12:33.396 00:12:33.396 --- 10.0.0.2 ping statistics --- 00:12:33.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.396 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:12:33.396 00:12:33.396 --- 10.0.0.1 ping statistics --- 00:12:33.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.396 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # return 0 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.396 15:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # nvmfpid=3886697 00:12:33.396 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # waitforlisten 3886697 00:12:33.397 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.397 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3886697 ']' 00:12:33.397 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.397 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.397 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:33.397 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.397 15:09:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 [2024-10-01 15:09:42.219580] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:12:33.397 [2024-10-01 15:09:42.219631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.397 [2024-10-01 15:09:42.287678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.397 [2024-10-01 15:09:42.353166] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.397 [2024-10-01 15:09:42.353206] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.397 [2024-10-01 15:09:42.353214] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.397 [2024-10-01 15:09:42.353221] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.397 [2024-10-01 15:09:42.353227] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:33.397 [2024-10-01 15:09:42.353310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.397 [2024-10-01 15:09:42.353441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.397 [2024-10-01 15:09:42.353596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.397 [2024-10-01 15:09:42.353597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20233 00:12:33.397 [2024-10-01 15:09:43.213546] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:33.397 { 00:12:33.397 "nqn": "nqn.2016-06.io.spdk:cnode20233", 00:12:33.397 "tgt_name": "foobar", 00:12:33.397 "method": "nvmf_create_subsystem", 00:12:33.397 "req_id": 1 00:12:33.397 } 00:12:33.397 Got JSON-RPC error 
response 00:12:33.397 response: 00:12:33.397 { 00:12:33.397 "code": -32603, 00:12:33.397 "message": "Unable to find target foobar" 00:12:33.397 }' 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:33.397 { 00:12:33.397 "nqn": "nqn.2016-06.io.spdk:cnode20233", 00:12:33.397 "tgt_name": "foobar", 00:12:33.397 "method": "nvmf_create_subsystem", 00:12:33.397 "req_id": 1 00:12:33.397 } 00:12:33.397 Got JSON-RPC error response 00:12:33.397 response: 00:12:33.397 { 00:12:33.397 "code": -32603, 00:12:33.397 "message": "Unable to find target foobar" 00:12:33.397 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:33.397 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22512 00:12:33.658 [2024-10-01 15:09:43.406211] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22512: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:33.658 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:33.658 { 00:12:33.658 "nqn": "nqn.2016-06.io.spdk:cnode22512", 00:12:33.658 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:33.658 "method": "nvmf_create_subsystem", 00:12:33.658 "req_id": 1 00:12:33.658 } 00:12:33.658 Got JSON-RPC error response 00:12:33.658 response: 00:12:33.658 { 00:12:33.658 "code": -32602, 00:12:33.658 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:33.658 }' 00:12:33.658 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:33.658 { 00:12:33.658 "nqn": "nqn.2016-06.io.spdk:cnode22512", 00:12:33.658 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:33.658 "method": "nvmf_create_subsystem", 
00:12:33.658 "req_id": 1 00:12:33.658 } 00:12:33.658 Got JSON-RPC error response 00:12:33.658 response: 00:12:33.658 { 00:12:33.658 "code": -32602, 00:12:33.658 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:33.658 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:33.658 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:33.658 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30488 00:12:33.919 [2024-10-01 15:09:43.594774] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30488: invalid model number 'SPDK_Controller' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:33.919 { 00:12:33.919 "nqn": "nqn.2016-06.io.spdk:cnode30488", 00:12:33.919 "model_number": "SPDK_Controller\u001f", 00:12:33.919 "method": "nvmf_create_subsystem", 00:12:33.919 "req_id": 1 00:12:33.919 } 00:12:33.919 Got JSON-RPC error response 00:12:33.919 response: 00:12:33.919 { 00:12:33.919 "code": -32602, 00:12:33.919 "message": "Invalid MN SPDK_Controller\u001f" 00:12:33.919 }' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:33.919 { 00:12:33.919 "nqn": "nqn.2016-06.io.spdk:cnode30488", 00:12:33.919 "model_number": "SPDK_Controller\u001f", 00:12:33.919 "method": "nvmf_create_subsystem", 00:12:33.919 "req_id": 1 00:12:33.919 } 00:12:33.919 Got JSON-RPC error response 00:12:33.919 response: 00:12:33.919 { 00:12:33.919 "code": -32602, 00:12:33.919 "message": "Invalid MN SPDK_Controller\u001f" 00:12:33.919 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 
00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 
00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:33.919 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.920 
15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:33.920 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:34.180 15:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ . == \- ]] 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '.]1&0^rI}uf,"]TY$DPt' 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '.]1&0^rI}uf,"]TY$DPt' nqn.2016-06.io.spdk:cnode11058 00:12:34.180 [2024-10-01 15:09:43.947890] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11058: invalid serial number '.]1&0^rI}uf,"]TY$DPt' 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:34.180 { 00:12:34.180 "nqn": "nqn.2016-06.io.spdk:cnode11058", 00:12:34.180 "serial_number": ".]1&0^\u007frI}uf,\"]TY$DPt", 00:12:34.180 "method": "nvmf_create_subsystem", 00:12:34.180 "req_id": 1 00:12:34.180 } 00:12:34.180 Got JSON-RPC error response 00:12:34.180 response: 00:12:34.180 { 00:12:34.180 "code": -32602, 00:12:34.180 "message": "Invalid SN .]1&0^\u007frI}uf,\"]TY$DPt" 00:12:34.180 }' 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:34.180 { 00:12:34.180 
"nqn": "nqn.2016-06.io.spdk:cnode11058", 00:12:34.180 "serial_number": ".]1&0^\u007frI}uf,\"]TY$DPt", 00:12:34.180 "method": "nvmf_create_subsystem", 00:12:34.180 "req_id": 1 00:12:34.180 } 00:12:34.180 Got JSON-RPC error response 00:12:34.180 response: 00:12:34.180 { 00:12:34.180 "code": -32602, 00:12:34.180 "message": "Invalid SN .]1&0^\u007frI}uf,\"]TY$DPt" 00:12:34.180 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:34.180 15:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:34.180 15:09:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:34.180 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 
00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:34.442 
15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:34.442 15:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:34.442 15:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.442 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 
00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:34.443 
15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.443 15:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:34.443 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:34.703 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:34.703 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:34.703 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:34.703 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:12:34.703 15:09:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'j?#1v/\1@Z`fuZ$t>UmdY?mSnI5!DUmdY?mSnI5!DUmdY?mSnI5!DUmdY?mSnI5!DUmdY?mSnI5!D /dev/null' 00:12:36.781 15:09:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.693 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:38.693 00:12:38.693 real 0m13.819s 00:12:38.693 user 0m20.530s 00:12:38.693 sys 0m6.447s 00:12:38.693 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.693 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:38.693 ************************************ 00:12:38.693 END TEST nvmf_invalid 00:12:38.693 ************************************ 00:12:38.693 15:09:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:38.693 15:09:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:38.693 15:09:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.693 15:09:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
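The nvmf_invalid log above shows target/invalid.sh building each random serial/model string one character at a time: it picks a printable ASCII code, renders it as hex with `printf %x`, converts it back to a character with `echo -e`, and appends it with `string+=`. A minimal standalone sketch of that technique follows; this is a hypothetical illustration, not the actual SPDK script, and the 33-126 code range is an assumption chosen so command substitution cannot strip a trailing space.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the gen_random_s pattern seen in the log above:
# convert a random character code to hex with printf %x, then back to a
# character with echo -e, appending one character per loop iteration.
gen_random_s() {
    local length=$1 ll x string=
    for ((ll = 0; ll < length; ll++)); do
        # codes 33-126: printable ASCII excluding space and DEL, so the
        # command substitution below cannot drop trailing whitespace
        x=$(printf '%x' $((33 + RANDOM % 94)))
        string+=$(echo -e "\x$x")
    done
    printf '%s\n' "$string"
}

gen_random_s 20   # e.g. a 20-character candidate serial number
```

The generated string can then be passed to `rpc.py nvmf_create_subsystem -s '<string>'`, as the test does, to exercise the target's serial-number validation.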
00:12:38.693 ************************************ 00:12:38.693 START TEST nvmf_connect_stress 00:12:38.693 ************************************ 00:12:38.693 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:38.956 * Looking for test storage... 00:12:38.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # 
ver1_l=2
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:12:38.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:38.956 --rc genhtml_branch_coverage=1
00:12:38.956 --rc genhtml_function_coverage=1
00:12:38.956 --rc genhtml_legend=1
00:12:38.956 --rc geninfo_all_blocks=1
00:12:38.956 --rc geninfo_unexecuted_blocks=1
00:12:38.956
00:12:38.956 '
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:12:38.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:38.956 --rc genhtml_branch_coverage=1
00:12:38.956 --rc genhtml_function_coverage=1
00:12:38.956 --rc genhtml_legend=1
00:12:38.956 --rc geninfo_all_blocks=1
00:12:38.956 --rc geninfo_unexecuted_blocks=1
00:12:38.956
00:12:38.956 '
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:12:38.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:38.956 --rc genhtml_branch_coverage=1
00:12:38.956 --rc genhtml_function_coverage=1
00:12:38.956 --rc genhtml_legend=1
00:12:38.956 --rc geninfo_all_blocks=1
00:12:38.956 --rc geninfo_unexecuted_blocks=1
00:12:38.956
00:12:38.956 '
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:12:38.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:38.956 --rc genhtml_branch_coverage=1
00:12:38.956 --rc genhtml_function_coverage=1
00:12:38.956 --rc genhtml_legend=1
00:12:38.956 --rc geninfo_all_blocks=1
00:12:38.956 --rc geninfo_unexecuted_blocks=1
00:12:38.956
00:12:38.956 '
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:38.956 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:38.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # '[' -z tcp ']'
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # prepare_net_devs
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@434 -- # local -g is_hw=no
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # remove_spdk_ns
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # [[ phy != virt ]]
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:12:38.957 15:09:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=()
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=()
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=()
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}")
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # [[ tcp == rdma ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == e810 ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}")
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@359 -- # (( 2 == 0 ))
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:12:47.100 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:12:47.100 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # (( 0 > 0 ))
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ e810 == e810 ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@396 -- # [[ tcp == rdma ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:12:47.100 Found net devices under 0000:4b:00.0: cvl_0_0
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ up == up ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:12:47.100 Found net devices under 0000:4b:00.1: cvl_0_1
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # is_hw=yes
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:47.100 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:47.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:47.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms
00:12:47.101
00:12:47.101 --- 10.0.0.2 ping statistics ---
00:12:47.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:47.101 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:47.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:47.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms
00:12:47.101
00:12:47.101 --- 10.0.0.1 ping statistics ---
00:12:47.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:47.101 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # return 0
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:12:47.101 15:09:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # nvmfpid=3891735
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # waitforlisten 3891735
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3891735 ']'
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:47.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.101 [2024-10-01 15:09:56.085686] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:12:47.101 [2024-10-01 15:09:56.085753] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:47.101 [2024-10-01 15:09:56.174455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:47.101 [2024-10-01 15:09:56.269235] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:47.101 [2024-10-01 15:09:56.269296] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:47.101 [2024-10-01 15:09:56.269306] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:47.101 [2024-10-01 15:09:56.269313] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:47.101 [2024-10-01 15:09:56.269319] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:47.101 [2024-10-01 15:09:56.269445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:12:47.101 [2024-10-01 15:09:56.269609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:12:47.101 [2024-10-01 15:09:56.269610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.101 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.101 [2024-10-01 15:09:56.940758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.377 [2024-10-01 15:09:56.978371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.377 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:47.378 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.378 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.378 NULL1
00:12:47.378 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.378 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3891920
00:12:47.378 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:47.378 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:12:47.378 15:09:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.378 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.639 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.639 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920
00:12:47.639 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:47.639 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.639 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:47.900 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:47.900 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920
00:12:47.900 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:47.900 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:47.900 15:09:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:48.472 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.472 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920
00:12:48.472 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:48.472 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.472 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:48.732 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.732 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920
00:12:48.732 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:48.732 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.732 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:48.992 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.992 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:48.992 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.992 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.992 15:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.252 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.252 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:49.252 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.252 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.252 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:49.824 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.824 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:49.824 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.824 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.824 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.085 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.085 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:50.085 15:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.085 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.085 15:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.345 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.345 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:50.345 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.345 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.345 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.607 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.607 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:50.607 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.607 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.607 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.868 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.868 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:50.868 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.868 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.868 
15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.440 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.440 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:51.440 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.440 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.440 15:10:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.701 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.701 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:51.701 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.701 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.701 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.018 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.018 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:52.018 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.018 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.018 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.307 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.307 
15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:52.307 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.307 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.307 15:10:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.574 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.574 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:52.574 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.574 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.574 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:52.835 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.835 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:52.835 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.835 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.835 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.095 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.095 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:53.095 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:53.095 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.095 15:10:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.666 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.666 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:53.666 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.666 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.666 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.926 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.927 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:53.927 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.927 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.927 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.188 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.188 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:54.188 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.188 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.188 15:10:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:54.449 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.449 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:54.449 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.449 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.449 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.021 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.021 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:55.021 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.021 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.021 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.281 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.281 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:55.281 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.281 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.281 15:10:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.543 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.543 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3891920 00:12:55.543 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.543 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.543 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:55.803 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.803 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:55.803 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.803 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.803 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.064 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.064 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:56.064 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.064 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.064 15:10:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.636 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.636 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:56.636 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.636 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:56.636 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:56.897 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.897 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:56.897 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:56.897 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.897 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.164 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.164 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:57.164 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:57.164 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.164 15:10:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:57.427 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3891920 00:12:57.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3891920) - No such process 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3891920 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.427 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.427 rmmod nvme_tcp 00:12:57.428 rmmod nvme_fabrics 00:12:57.428 rmmod nvme_keyring 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@513 -- # '[' -n 3891735 ']' 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # killprocess 3891735 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3891735 ']' 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3891735 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@955 -- # uname 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.428 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3891735 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3891735' 00:12:57.689 killing process with pid 3891735 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3891735 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3891735 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-save 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # iptables-restore 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.689 15:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.689 15:10:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.233 00:13:00.233 real 0m20.993s 00:13:00.233 user 0m42.154s 00:13:00.233 sys 0m8.956s 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.233 ************************************ 00:13:00.233 END TEST nvmf_connect_stress 00:13:00.233 ************************************ 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.233 ************************************ 00:13:00.233 START TEST nvmf_fused_ordering 00:13:00.233 ************************************ 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:00.233 * Looking for test storage... 
00:13:00.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:00.233 15:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.233 15:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:00.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.233 --rc genhtml_branch_coverage=1 00:13:00.233 --rc genhtml_function_coverage=1 00:13:00.233 --rc genhtml_legend=1 00:13:00.233 --rc geninfo_all_blocks=1 00:13:00.233 --rc geninfo_unexecuted_blocks=1 00:13:00.233 00:13:00.233 ' 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:00.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.233 --rc genhtml_branch_coverage=1 00:13:00.233 --rc genhtml_function_coverage=1 00:13:00.233 --rc genhtml_legend=1 00:13:00.233 --rc geninfo_all_blocks=1 00:13:00.233 --rc geninfo_unexecuted_blocks=1 00:13:00.233 00:13:00.233 ' 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:00.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.233 --rc genhtml_branch_coverage=1 00:13:00.233 --rc genhtml_function_coverage=1 00:13:00.233 --rc genhtml_legend=1 00:13:00.233 --rc geninfo_all_blocks=1 00:13:00.233 --rc geninfo_unexecuted_blocks=1 00:13:00.233 00:13:00.233 ' 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:00.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.233 --rc genhtml_branch_coverage=1 00:13:00.233 --rc genhtml_function_coverage=1 00:13:00.233 --rc genhtml_legend=1 00:13:00.233 --rc geninfo_all_blocks=1 00:13:00.233 --rc geninfo_unexecuted_blocks=1 00:13:00.233 00:13:00.233 ' 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.233 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.234 15:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.372 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:08.372 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:08.372 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:08.372 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:08.372 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.373 15:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:08.373 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:08.373 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:08.373 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:08.373 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.373 15:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # is_hw=yes 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.373 15:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.373 15:10:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.373 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.373 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:08.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:08.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:13:08.374 00:13:08.374 --- 10.0.0.2 ping statistics --- 00:13:08.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.374 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:13:08.374 00:13:08.374 --- 10.0.0.1 ping statistics --- 00:13:08.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.374 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # return 0 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:08.374 15:10:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # nvmfpid=3898274 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # waitforlisten 3898274 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3898274 ']' 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.374 15:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.374 [2024-10-01 15:10:17.332378] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:13:08.374 [2024-10-01 15:10:17.332432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.374 [2024-10-01 15:10:17.418483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.374 [2024-10-01 15:10:17.494448] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.374 [2024-10-01 15:10:17.494507] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.374 [2024-10-01 15:10:17.494515] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.374 [2024-10-01 15:10:17.494522] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.374 [2024-10-01 15:10:17.494528] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:08.374 [2024-10-01 15:10:17.494556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.374 [2024-10-01 15:10:18.182026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.374 [2024-10-01 15:10:18.206333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.374 NULL1 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.374 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.636 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.636 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:08.636 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.636 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:08.636 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.636 15:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:08.636 [2024-10-01 15:10:18.277268] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:13:08.636 [2024-10-01 15:10:18.277320] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898312 ] 00:13:09.206 Attached to nqn.2016-06.io.spdk:cnode1 00:13:09.206 Namespace ID: 1 size: 1GB 00:13:09.206 fused_ordering(0) 00:13:09.206 fused_ordering(1) 00:13:09.206 fused_ordering(2) 00:13:09.206 fused_ordering(3) 00:13:09.206 fused_ordering(4) 00:13:09.206 fused_ordering(5) 00:13:09.206 fused_ordering(6) 00:13:09.206 fused_ordering(7) 00:13:09.206 fused_ordering(8) 00:13:09.206 fused_ordering(9) 00:13:09.206 fused_ordering(10) 00:13:09.206 fused_ordering(11) 00:13:09.206 fused_ordering(12) 00:13:09.206 fused_ordering(13) 00:13:09.206 fused_ordering(14) 00:13:09.206 fused_ordering(15) 00:13:09.206 fused_ordering(16) 00:13:09.206 fused_ordering(17) 00:13:09.206 fused_ordering(18) 00:13:09.206 fused_ordering(19) 00:13:09.206 fused_ordering(20) 00:13:09.206 fused_ordering(21) 00:13:09.206 fused_ordering(22) 00:13:09.206 fused_ordering(23) 00:13:09.206 fused_ordering(24) 00:13:09.206 fused_ordering(25) 00:13:09.206 fused_ordering(26) 00:13:09.206 fused_ordering(27) 00:13:09.206 
fused_ordering(28) 00:13:09.206 [repetitive fused_ordering counter output for iterations 29-1022 elided; emitted between 00:13:09.206 and 00:13:10.876] fused_ordering(1023) 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.876 rmmod nvme_tcp 00:13:10.876 rmmod nvme_fabrics 00:13:10.876 rmmod nvme_keyring 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@513 -- # '[' -n 3898274 ']' 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # killprocess 3898274 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3898274 ']' 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3898274 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.876 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3898274 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3898274' 00:13:11.137 killing process with pid 3898274 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3898274 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3898274 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@519 -- # [[ tcp == 
\t\c\p ]] 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-save 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # iptables-restore 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.137 15:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.683 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.683 00:13:13.683 real 0m13.335s 00:13:13.683 user 0m7.221s 00:13:13.683 sys 0m6.896s 00:13:13.683 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.683 15:10:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:13.683 ************************************ 00:13:13.683 END TEST nvmf_fused_ordering 00:13:13.683 ************************************ 00:13:13.683 15:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:13.683 15:10:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.683 15:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.684 15:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.684 ************************************ 00:13:13.684 START TEST nvmf_ns_masking 00:13:13.684 ************************************ 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:13.684 * Looking for test storage... 00:13:13.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.684 15:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:13.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.684 --rc genhtml_branch_coverage=1 00:13:13.684 --rc genhtml_function_coverage=1 00:13:13.684 --rc genhtml_legend=1 00:13:13.684 --rc geninfo_all_blocks=1 00:13:13.684 --rc geninfo_unexecuted_blocks=1 00:13:13.684 00:13:13.684 ' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:13.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.684 --rc genhtml_branch_coverage=1 00:13:13.684 --rc genhtml_function_coverage=1 00:13:13.684 --rc genhtml_legend=1 00:13:13.684 --rc geninfo_all_blocks=1 00:13:13.684 --rc geninfo_unexecuted_blocks=1 00:13:13.684 00:13:13.684 ' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:13.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.684 --rc genhtml_branch_coverage=1 00:13:13.684 --rc genhtml_function_coverage=1 00:13:13.684 --rc genhtml_legend=1 00:13:13.684 --rc geninfo_all_blocks=1 00:13:13.684 --rc geninfo_unexecuted_blocks=1 00:13:13.684 00:13:13.684 ' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:13.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.684 --rc genhtml_branch_coverage=1 00:13:13.684 --rc 
genhtml_function_coverage=1 00:13:13.684 --rc genhtml_legend=1 00:13:13.684 --rc geninfo_all_blocks=1 00:13:13.684 --rc geninfo_unexecuted_blocks=1 00:13:13.684 00:13:13.684 ' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.684 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2d3ac1a7-d4db-48c0-8599-e52802e6bdd7 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9fbecc88-250a-464a-a691-47c7877033d2 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9972490a-8c4c-4b45-91d6-cad915ad3077 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@434 -- # local -g 
is_hw=no 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.685 15:10:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.834 15:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.834 15:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:21.834 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:21.834 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:21.834 15:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:21.834 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:21.834 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # is_hw=yes 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:21.834 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:21.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:13:21.835 00:13:21.835 --- 10.0.0.2 ping statistics --- 00:13:21.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.835 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms
00:13:21.835
00:13:21.835 --- 10.0.0.1 ping statistics ---
00:13:21.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:21.835 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # return 0
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # nvmfpid=3902983
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # waitforlisten 3902983
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3902983 ']'
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:21.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:21.835 15:10:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:21.835 [2024-10-01 15:10:30.748855] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:13:21.835 [2024-10-01 15:10:30.748920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:21.835 [2024-10-01 15:10:30.820360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:21.835 [2024-10-01 15:10:30.894839] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:21.835 [2024-10-01 15:10:30.894879] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:21.835 [2024-10-01 15:10:30.894886] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:21.835 [2024-10-01 15:10:30.894893] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:21.835 [2024-10-01 15:10:30.894899] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:21.835 [2024-10-01 15:10:30.894917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:21.835 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:21.835 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0
00:13:21.835 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:13:21.835 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:21.835 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:21.835 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:21.835 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:13:22.096 [2024-10-01 15:10:31.731748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:22.096 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:13:22.096 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:13:22.096 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:13:22.096 Malloc1
00:13:22.096 15:10:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:13:22.356 Malloc2
00:13:22.356 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:22.616 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:13:22.616 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:22.877 [2024-10-01 15:10:32.595975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:22.877 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:13:22.877 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9972490a-8c4c-4b45-91d6-cad915ad3077 -a 10.0.0.2 -s 4420 -i 4
00:13:23.139 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:13:23.139 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:13:23.139 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:23.139 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:13:23.139 15:10:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
[ 0]:0x1
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:25.050 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:25.310 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=80f4c8ea0bad49dca5a0b1c630707365
00:13:25.310 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 80f4c8ea0bad49dca5a0b1c630707365 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:25.310 15:10:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
[ 0]:0x1
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=80f4c8ea0bad49dca5a0b1c630707365
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 80f4c8ea0bad49dca5a0b1c630707365 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 1]:0x2
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:25.310 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:25.573 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6784efabf74543e69a8e3a84b499d929
00:13:25.573 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6784efabf74543e69a8e3a84b499d929 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:25.573 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:13:25.573 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:25.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:25.573 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:25.833 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:13:25.833 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:13:25.833 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9972490a-8c4c-4b45-91d6-cad915ad3077 -a 10.0.0.2 -s 4420 -i 4
00:13:26.093 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:13:26.093 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:13:26.093 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:26.093 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]]
00:13:26.093 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1
00:13:26.093 15:10:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:28.635 15:10:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6784efabf74543e69a8e3a84b499d929
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6784efabf74543e69a8e3a84b499d929 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:28.635 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
[ 0]:0x1
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=80f4c8ea0bad49dca5a0b1c630707365
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 80f4c8ea0bad49dca5a0b1c630707365 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 1]:0x2
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6784efabf74543e69a8e3a84b499d929
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6784efabf74543e69a8e3a84b499d929 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:28.636 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6784efabf74543e69a8e3a84b499d929
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6784efabf74543e69a8e3a84b499d929 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:28.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:28.896 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:29.157 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:13:29.157 15:10:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9972490a-8c4c-4b45-91d6-cad915ad3077 -a 10.0.0.2 -s 4420 -i 4
00:13:29.417 15:10:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:13:29.417 15:10:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:13:29.417 15:10:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:13:29.417 15:10:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:13:29.417 15:10:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:13:29.417 15:10:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
[ 0]:0x1
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:31.331 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=80f4c8ea0bad49dca5a0b1c630707365
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 80f4c8ea0bad49dca5a0b1c630707365 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 1]:0x2
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6784efabf74543e69a8e3a84b499d929
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6784efabf74543e69a8e3a84b499d929 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:31.593 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6784efabf74543e69a8e3a84b499d929
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6784efabf74543e69a8e3a84b499d929 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:31.855 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:31.856 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:31.856 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:13:31.856 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:32.118 [2024-10-01 15:10:41.770442] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:13:32.118 request:
00:13:32.118 {
00:13:32.118 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:32.118 "nsid": 2,
00:13:32.118 "host": "nqn.2016-06.io.spdk:host1",
00:13:32.118 "method": "nvmf_ns_remove_host",
00:13:32.118 "req_id": 1
00:13:32.118 }
00:13:32.118 Got JSON-RPC error response
00:13:32.118 response:
00:13:32.118 {
00:13:32.118 "code": -32602,
00:13:32.118 "message": "Invalid parameters"
00:13:32.118 }
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
[ 0]:0x2
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6784efabf74543e69a8e3a84b499d929
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6784efabf74543e69a8e3a84b499d929 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:32.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3905472
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3905472 /var/tmp/host.sock
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3905472 ']'
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:13:32.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:32.118 15:10:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:32.380 [2024-10-01 15:10:42.021344] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:13:32.380 [2024-10-01 15:10:42.021398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905472 ] 00:13:32.380 [2024-10-01 15:10:42.098761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.380 [2024-10-01 15:10:42.162962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.321 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.321 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:33.321 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.321 15:10:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.321 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2d3ac1a7-d4db-48c0-8599-e52802e6bdd7 00:13:33.321 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:13:33.321 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2D3AC1A7D4DB48C08599E52802E6BDD7 -i 00:13:33.581 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9fbecc88-250a-464a-a691-47c7877033d2 00:13:33.581 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@783 -- # tr -d - 00:13:33.582 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9FBECC88250A464AA69147C7877033D2 -i 00:13:33.842 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:33.842 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:34.102 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:34.102 15:10:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:34.363 nvme0n1 00:13:34.363 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:34.363 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:34.622 nvme1n2 00:13:34.622 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:34.622 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
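The `uuid2nguid` calls traced above turn a bdev UUID into the `-g` NGUID argument for `nvmf_subsystem_add_ns` by piping it through `tr -d -`. A self-contained sketch (the explicit uppercasing shown here matches the `-g` values in the trace, e.g. `2D3AC1A7D4DB48C08599E52802E6BDD7`, and is an assumption about the helper's full behavior):

```shell
# Sketch of nvmf/common.sh's uuid2nguid: strip dashes from the UUID and
# uppercase it to form the 32-hex-digit NGUID. Uppercasing is inferred from
# the -g values visible in the trace, not read from the upstream helper.
uuid2nguid() {
    echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 2d3ac1a7-d4db-48c0-8599-e52802e6bdd7   # 2D3AC1A7D4DB48C08599E52802E6BDD7
```

The later `bdev_get_bdevs` assertions close the loop: the UUID reported for `nvme0n1` must round-trip back to the same value that was registered as the NGUID.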
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:34.622 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:34.622 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:34.622 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:34.881 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:34.881 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:34.881 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:34.881 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:34.881 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2d3ac1a7-d4db-48c0-8599-e52802e6bdd7 == \2\d\3\a\c\1\a\7\-\d\4\d\b\-\4\8\c\0\-\8\5\9\9\-\e\5\2\8\0\2\e\6\b\d\d\7 ]] 00:13:34.881 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:34.881 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:34.881 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:35.140 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9fbecc88-250a-464a-a691-47c7877033d2 == \9\f\b\e\c\c\8\8\-\2\5\0\a\-\4\6\4\a\-\a\6\9\1\-\4\7\c\7\8\7\7\0\3\3\d\2 ]] 00:13:35.140 15:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3905472 00:13:35.140 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3905472 ']' 00:13:35.140 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3905472 00:13:35.140 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:35.141 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.141 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3905472 00:13:35.141 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:35.141 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:35.141 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3905472' 00:13:35.141 killing process with pid 3905472 00:13:35.141 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3905472 00:13:35.141 15:10:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3905472 00:13:35.400 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@121 -- # sync 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.660 rmmod nvme_tcp 00:13:35.660 rmmod nvme_fabrics 00:13:35.660 rmmod nvme_keyring 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@513 -- # '[' -n 3902983 ']' 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # killprocess 3902983 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3902983 ']' 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3902983 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3902983 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:35.660 15:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3902983' 00:13:35.660 killing process with pid 3902983 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3902983 00:13:35.660 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3902983 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-save 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # iptables-restore 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.921 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.922 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.922 15:10:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:38.467 00:13:38.467 real 0m24.706s 00:13:38.467 user 0m24.595s 00:13:38.467 sys 
0m7.842s 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:38.467 ************************************ 00:13:38.467 END TEST nvmf_ns_masking 00:13:38.467 ************************************ 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:38.467 ************************************ 00:13:38.467 START TEST nvmf_nvme_cli 00:13:38.467 ************************************ 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:38.467 * Looking for test storage... 
00:13:38.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:38.467 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:38.468 15:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:38.468 15:10:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.468 --rc 
genhtml_branch_coverage=1 00:13:38.468 --rc genhtml_function_coverage=1 00:13:38.468 --rc genhtml_legend=1 00:13:38.468 --rc geninfo_all_blocks=1 00:13:38.468 --rc geninfo_unexecuted_blocks=1 00:13:38.468 00:13:38.468 ' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.468 --rc genhtml_branch_coverage=1 00:13:38.468 --rc genhtml_function_coverage=1 00:13:38.468 --rc genhtml_legend=1 00:13:38.468 --rc geninfo_all_blocks=1 00:13:38.468 --rc geninfo_unexecuted_blocks=1 00:13:38.468 00:13:38.468 ' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.468 --rc genhtml_branch_coverage=1 00:13:38.468 --rc genhtml_function_coverage=1 00:13:38.468 --rc genhtml_legend=1 00:13:38.468 --rc geninfo_all_blocks=1 00:13:38.468 --rc geninfo_unexecuted_blocks=1 00:13:38.468 00:13:38.468 ' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:38.468 --rc genhtml_branch_coverage=1 00:13:38.468 --rc genhtml_function_coverage=1 00:13:38.468 --rc genhtml_legend=1 00:13:38.468 --rc geninfo_all_blocks=1 00:13:38.468 --rc geninfo_unexecuted_blocks=1 00:13:38.468 00:13:38.468 ' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.468 15:10:48 
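The `lt 1.15 2` trace above exercises `scripts/common.sh`'s `cmp_versions`: both versions are split on `.`, `-`, and `:` into arrays and compared component-wise, with missing components treated as 0. A condensed re-implementation of just the `<` path, sketched from the traced steps and not copied from upstream:

```shell
# Condensed sketch of scripts/common.sh's version comparison (only the "<"
# case, which is all `lt` needs). Splitting on ".-:" and the component-wise
# loop follow the trace; the exact upstream code handles more operators.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    local i v1 v2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++)); do
        v1=${ver1[i]:-0} v2=${ver2[i]:-0}
        ((v1 > v2)) && return 1
        ((v1 < v2)) && return 0
    done
    return 1   # equal versions are not strictly "<"
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # lcov 1.15 predates 2.x
```

In the log this drives which `--rc lcov_*` coverage options get exported for the installed lcov version.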
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:38.468 15:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.468 15:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:38.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:38.468 15:10:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:45.060 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.060 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:45.060 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:45.060 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:45.060 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:45.060 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:45.060 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:45.322 15:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 
00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:45.322 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:45.322 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:13:45.322 15:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:45.322 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.322 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.323 15:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ up == up ]] 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:45.323 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # is_hw=yes 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.323 15:10:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.323 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.323 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.323 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:45.323 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:45.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:13:45.585 00:13:45.585 --- 10.0.0.2 ping statistics --- 00:13:45.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.585 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:13:45.585 00:13:45.585 --- 10.0.0.1 ping statistics --- 00:13:45.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.585 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # return 0 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:45.585 15:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # nvmfpid=3910191 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # waitforlisten 3910191 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3910191 ']' 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.585 15:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:45.585 [2024-10-01 15:10:55.328468] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:13:45.585 [2024-10-01 15:10:55.328569] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.585 [2024-10-01 15:10:55.403142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.846 [2024-10-01 15:10:55.480654] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.847 [2024-10-01 15:10:55.480693] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.847 [2024-10-01 15:10:55.480705] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.847 [2024-10-01 15:10:55.480712] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.847 [2024-10-01 15:10:55.480718] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.847 [2024-10-01 15:10:55.480863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.847 [2024-10-01 15:10:55.480982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.847 [2024-10-01 15:10:55.481149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.847 [2024-10-01 15:10:55.481271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.419 [2024-10-01 15:10:56.185158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.419 Malloc0 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.419 Malloc1 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.419 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.420 [2024-10-01 15:10:56.274943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:13:46.680 00:13:46.680 Discovery Log Number of Records 2, Generation counter 2 00:13:46.680 =====Discovery Log Entry 0====== 00:13:46.680 trtype: tcp 00:13:46.680 adrfam: ipv4 00:13:46.680 subtype: current discovery subsystem 00:13:46.680 treq: not required 00:13:46.680 portid: 0 00:13:46.680 trsvcid: 4420 
00:13:46.680 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:46.680 traddr: 10.0.0.2 00:13:46.680 eflags: explicit discovery connections, duplicate discovery information 00:13:46.680 sectype: none 00:13:46.680 =====Discovery Log Entry 1====== 00:13:46.680 trtype: tcp 00:13:46.680 adrfam: ipv4 00:13:46.680 subtype: nvme subsystem 00:13:46.680 treq: not required 00:13:46.680 portid: 0 00:13:46.680 trsvcid: 4420 00:13:46.680 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:46.680 traddr: 10.0.0.2 00:13:46.680 eflags: none 00:13:46.680 sectype: none 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:46.680 15:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.594 15:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:48.594 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:48.594 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.594 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:48.594 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:48.594 15:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.622 15:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:13:50.622 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:13:50.622 
15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.622 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ --------------------- == /dev/nvme* ]] 00:13:50.622 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.622 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:50.623 /dev/nvme0n2 ]] 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@546 -- # local dev _ 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@545 -- # nvme list 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ Node == /dev/nvme* ]] 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n1 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # echo /dev/nvme0n2 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # read -r dev _ 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:50.623 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # nvmfcleanup
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:50.900 rmmod nvme_tcp
00:13:50.900 rmmod nvme_fabrics
00:13:50.900 rmmod nvme_keyring
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@513 -- # '[' -n 3910191 ']'
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # killprocess 3910191
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3910191 ']'
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3910191
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3910191
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3910191'
00:13:50.900 killing process with pid 3910191
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3910191
00:13:50.900 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3910191
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-save
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@787 -- # iptables-restore
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:51.161 15:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:53.709 15:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:53.709
00:13:53.709 real 0m15.182s
00:13:53.709 user 0m23.754s
00:13:53.709 sys 0m6.151s
00:13:53.709 15:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:53.709 15:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:13:53.709 ************************************
00:13:53.709 END TEST nvmf_nvme_cli
00:13:53.709 ************************************
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:53.709 ************************************
00:13:53.709 START TEST nvmf_vfio_user
00:13:53.709 ************************************
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:13:53.709 * Looking for test storage...
00:13:53.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1
00:13:53.709 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:13:53.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.710 --rc genhtml_branch_coverage=1
00:13:53.710 --rc genhtml_function_coverage=1
00:13:53.710 --rc genhtml_legend=1
00:13:53.710 --rc geninfo_all_blocks=1
00:13:53.710 --rc geninfo_unexecuted_blocks=1
00:13:53.710
00:13:53.710 '
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:13:53.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.710 --rc genhtml_branch_coverage=1
00:13:53.710 --rc genhtml_function_coverage=1
00:13:53.710 --rc genhtml_legend=1
00:13:53.710 --rc geninfo_all_blocks=1
00:13:53.710 --rc geninfo_unexecuted_blocks=1
00:13:53.710
00:13:53.710 '
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:13:53.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.710 --rc genhtml_branch_coverage=1
00:13:53.710 --rc genhtml_function_coverage=1
00:13:53.710 --rc genhtml_legend=1
00:13:53.710 --rc geninfo_all_blocks=1
00:13:53.710 --rc geninfo_unexecuted_blocks=1
00:13:53.710
00:13:53.710 '
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:13:53.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.710 --rc genhtml_branch_coverage=1
00:13:53.710 --rc genhtml_function_coverage=1
00:13:53.710 --rc genhtml_legend=1
00:13:53.710 --rc geninfo_all_blocks=1
00:13:53.710 --rc geninfo_unexecuted_blocks=1
00:13:53.710
00:13:53.710 '
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3912011
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3912011'
00:13:53.710 Process pid: 3912011
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3912011
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3912011 ']'
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:53.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:13:53.710 15:11:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:13:53.711 [2024-10-01 15:11:03.358360] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:13:53.711 [2024-10-01 15:11:03.358425] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:53.711 [2024-10-01 15:11:03.424096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:53.711 [2024-10-01 15:11:03.498583] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:53.711 [2024-10-01 15:11:03.498626] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:53.711 [2024-10-01 15:11:03.498634] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:53.711 [2024-10-01 15:11:03.498641] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:53.711 [2024-10-01 15:11:03.498647] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:53.711 [2024-10-01 15:11:03.498789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:13:53.711 [2024-10-01 15:11:03.498915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:13:53.711 [2024-10-01 15:11:03.499073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:53.711 [2024-10-01 15:11:03.499074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:13:54.654 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:54.654 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0
00:13:54.654 15:11:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:13:55.598 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:13:55.598 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:13:55.598 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:13:55.598 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:55.598 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:13:55.598 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:13:55.858 Malloc1
00:13:55.858 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:13:56.120 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:13:56.120 15:11:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:13:56.381 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:56.381 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:13:56.381 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:13:56.642 Malloc2
00:13:56.642 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:13:56.642 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:13:56.903 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:13:57.167 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:13:57.167 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:13:57.167 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:13:57.167 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:13:57.167 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:13:57.167 15:11:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:13:57.167 [2024-10-01 15:11:06.871235] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:13:57.167 [2024-10-01 15:11:06.871282] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912706 ]
00:13:57.167 [2024-10-01 15:11:06.902614] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:13:57.167 [2024-10-01 15:11:06.915833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:57.167 [2024-10-01 15:11:06.915855] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe330ea8000
00:13:57.167 [2024-10-01 15:11:06.916841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:57.167 [2024-10-01 15:11:06.917836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:57.167 [2024-10-01 15:11:06.918845] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:57.167 [2024-10-01 15:11:06.919862] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:57.167 [2024-10-01 15:11:06.920854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:57.167 [2024-10-01 15:11:06.921857] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:57.167 [2024-10-01 15:11:06.922866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:57.167 [2024-10-01 15:11:06.923878] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:57.167 [2024-10-01 15:11:06.924888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:57.167 [2024-10-01 15:11:06.924897] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe330e9d000
00:13:57.167 [2024-10-01 15:11:06.926322] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:57.167 [2024-10-01 15:11:06.943846] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:13:57.167 [2024-10-01 15:11:06.943872] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout)
00:13:57.167 [2024-10-01 15:11:06.949026] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:13:57.167 [2024-10-01 15:11:06.949075] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:13:57.167 [2024-10-01 15:11:06.949158] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout)
00:13:57.167 [2024-10-01 15:11:06.949182] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout)
00:13:57.167 [2024-10-01 15:11:06.949188] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout)
00:13:57.167 [2024-10-01 15:11:06.950028] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:13:57.167 [2024-10-01 15:11:06.950038] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout)
00:13:57.168 [2024-10-01 15:11:06.950045] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout)
00:13:57.168 [2024-10-01 15:11:06.951027] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:13:57.168 [2024-10-01 15:11:06.951036] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout)
00:13:57.168 [2024-10-01 15:11:06.951043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms)
00:13:57.168 [2024-10-01 15:11:06.952038] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:13:57.168 [2024-10-01 15:11:06.952048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:13:57.168 [2024-10-01 15:11:06.953045] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:13:57.168 [2024-10-01 15:11:06.953054] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0
00:13:57.168 [2024-10-01 15:11:06.953059] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms)
00:13:57.168 [2024-10-01 15:11:06.953066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:13:57.168 [2024-10-01 15:11:06.953171] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1
00:13:57.168 [2024-10-01 15:11:06.953177] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:13:57.168 [2024-10-01 15:11:06.953182] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
00:13:57.168 [2024-10-01 15:11:06.954055] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
00:13:57.168 [2024-10-01 15:11:06.955057] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
00:13:57.168 [2024-10-01 15:11:06.956063] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:13:57.168 [2024-10-01 15:11:06.957059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:13:57.168 [2024-10-01 15:11:06.957111] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:13:57.168 [2024-10-01 15:11:06.958070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:13:57.168 [2024-10-01 15:11:06.958078] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:13:57.168 [2024-10-01 15:11:06.958083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms)
00:13:57.168 [2024-10-01 15:11:06.958107] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout)
00:13:57.168 [2024-10-01 15:11:06.958118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms)
00:13:57.168 [2024-10-01 15:11:06.958134] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:13:57.168 [2024-10-01 15:11:06.958139] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:13:57.168 [2024-10-01 15:11:06.958143] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:13:57.168 [2024-10-01 15:11:06.958156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:13:57.168 [2024-10-01 15:11:06.958195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:13:57.168 [2024-10-01 15:11:06.958205] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072
00:13:57.168 [2024-10-01 15:11:06.958210] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072
00:13:57.168 [2024-10-01 15:11:06.958215] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001
00:13:57.168 [2024-10-01 15:11:06.958220] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:13:57.168 [2024-10-01 15:11:06.958225] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1
00:13:57.168 [2024-10-01 15:11:06.958230] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1
00:13:57.168 [2024-10-01 15:11:06.958235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms)
00:13:57.168 [2024-10-01 15:11:06.958243] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms)
00:13:57.168 [2024-10-01 15:11:06.958253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:13:57.168 [2024-10-01 15:11:06.958265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:13:57.168 [2024-10-01 15:11:06.958277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:13:57.168 [2024-10-01 15:11:06.958286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0
cdw10:00000000 cdw11:00000000 00:13:57.168 [2024-10-01 15:11:06.958294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.168 [2024-10-01 15:11:06.958303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:57.168 [2024-10-01 15:11:06.958308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:57.168 [2024-10-01 15:11:06.958339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:57.168 [2024-10-01 15:11:06.958345] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:57.168 [2024-10-01 15:11:06.958352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958367] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:57.168 [2024-10-01 15:11:06.958385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:57.168 [2024-10-01 15:11:06.958447] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958455] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958462] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:57.168 [2024-10-01 15:11:06.958467] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:57.168 [2024-10-01 15:11:06.958471] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.168 [2024-10-01 15:11:06.958477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:57.168 [2024-10-01 15:11:06.958486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:57.168 [2024-10-01 15:11:06.958496] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:57.168 [2024-10-01 15:11:06.958504] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958513] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958521] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:57.168 [2024-10-01 15:11:06.958525] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:57.168 [2024-10-01 15:11:06.958528] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.168 [2024-10-01 15:11:06.958535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:57.168 [2024-10-01 15:11:06.958553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:57.168 [2024-10-01 15:11:06.958565] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958573] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958580] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:57.168 [2024-10-01 15:11:06.958584] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:57.168 [2024-10-01 15:11:06.958588] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.168 [2024-10-01 15:11:06.958594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:57.168 [2024-10-01 15:11:06.958605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:57.168 [2024-10-01 15:11:06.958614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:57.168 [2024-10-01 15:11:06.958621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:57.169 [2024-10-01 15:11:06.958630] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:57.169 [2024-10-01 15:11:06.958636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:57.169 [2024-10-01 15:11:06.958642] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:57.169 [2024-10-01 15:11:06.958647] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:57.169 [2024-10-01 15:11:06.958652] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:57.169 [2024-10-01 15:11:06.958657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:57.169 [2024-10-01 15:11:06.958662] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:57.169 [2024-10-01 15:11:06.958680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:57.169 [2024-10-01 15:11:06.958692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:57.169 [2024-10-01 15:11:06.958704] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:57.169 [2024-10-01 15:11:06.958714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:57.169 [2024-10-01 15:11:06.958726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:57.169 [2024-10-01 15:11:06.958736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:57.169 [2024-10-01 15:11:06.958747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:57.169 [2024-10-01 15:11:06.958754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:57.169 [2024-10-01 15:11:06.958767] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:57.169 [2024-10-01 15:11:06.958772] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:57.169 [2024-10-01 15:11:06.958776] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:57.169 [2024-10-01 15:11:06.958780] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:57.169 [2024-10-01 15:11:06.958783] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:57.169 [2024-10-01 15:11:06.958789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:57.169 [2024-10-01 15:11:06.958797] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:57.169 [2024-10-01 
15:11:06.958802] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:57.169 [2024-10-01 15:11:06.958807] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.169 [2024-10-01 15:11:06.958813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:57.169 [2024-10-01 15:11:06.958820] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:57.169 [2024-10-01 15:11:06.958825] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:57.169 [2024-10-01 15:11:06.958828] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.169 [2024-10-01 15:11:06.958834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:57.169 [2024-10-01 15:11:06.958842] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:57.169 [2024-10-01 15:11:06.958846] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:57.169 [2024-10-01 15:11:06.958850] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:57.169 [2024-10-01 15:11:06.958856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:57.169 [2024-10-01 15:11:06.958863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:57.169 [2024-10-01 15:11:06.958875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:13:57.169 [2024-10-01 15:11:06.958885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:57.169 [2024-10-01 15:11:06.958893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:57.169 ===================================================== 00:13:57.169 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:57.169 ===================================================== 00:13:57.169 Controller Capabilities/Features 00:13:57.169 ================================ 00:13:57.169 Vendor ID: 4e58 00:13:57.169 Subsystem Vendor ID: 4e58 00:13:57.169 Serial Number: SPDK1 00:13:57.169 Model Number: SPDK bdev Controller 00:13:57.169 Firmware Version: 25.01 00:13:57.169 Recommended Arb Burst: 6 00:13:57.169 IEEE OUI Identifier: 8d 6b 50 00:13:57.169 Multi-path I/O 00:13:57.169 May have multiple subsystem ports: Yes 00:13:57.169 May have multiple controllers: Yes 00:13:57.169 Associated with SR-IOV VF: No 00:13:57.169 Max Data Transfer Size: 131072 00:13:57.169 Max Number of Namespaces: 32 00:13:57.169 Max Number of I/O Queues: 127 00:13:57.169 NVMe Specification Version (VS): 1.3 00:13:57.169 NVMe Specification Version (Identify): 1.3 00:13:57.169 Maximum Queue Entries: 256 00:13:57.169 Contiguous Queues Required: Yes 00:13:57.169 Arbitration Mechanisms Supported 00:13:57.169 Weighted Round Robin: Not Supported 00:13:57.169 Vendor Specific: Not Supported 00:13:57.169 Reset Timeout: 15000 ms 00:13:57.169 Doorbell Stride: 4 bytes 00:13:57.169 NVM Subsystem Reset: Not Supported 00:13:57.169 Command Sets Supported 00:13:57.169 NVM Command Set: Supported 00:13:57.169 Boot Partition: Not Supported 00:13:57.169 Memory Page Size Minimum: 4096 bytes 00:13:57.169 Memory Page Size Maximum: 4096 bytes 00:13:57.169 Persistent Memory Region: Not Supported 00:13:57.169 Optional Asynchronous Events 
Supported 00:13:57.169 Namespace Attribute Notices: Supported 00:13:57.169 Firmware Activation Notices: Not Supported 00:13:57.169 ANA Change Notices: Not Supported 00:13:57.169 PLE Aggregate Log Change Notices: Not Supported 00:13:57.169 LBA Status Info Alert Notices: Not Supported 00:13:57.169 EGE Aggregate Log Change Notices: Not Supported 00:13:57.169 Normal NVM Subsystem Shutdown event: Not Supported 00:13:57.169 Zone Descriptor Change Notices: Not Supported 00:13:57.169 Discovery Log Change Notices: Not Supported 00:13:57.169 Controller Attributes 00:13:57.169 128-bit Host Identifier: Supported 00:13:57.169 Non-Operational Permissive Mode: Not Supported 00:13:57.169 NVM Sets: Not Supported 00:13:57.169 Read Recovery Levels: Not Supported 00:13:57.169 Endurance Groups: Not Supported 00:13:57.169 Predictable Latency Mode: Not Supported 00:13:57.169 Traffic Based Keep ALive: Not Supported 00:13:57.169 Namespace Granularity: Not Supported 00:13:57.169 SQ Associations: Not Supported 00:13:57.169 UUID List: Not Supported 00:13:57.169 Multi-Domain Subsystem: Not Supported 00:13:57.169 Fixed Capacity Management: Not Supported 00:13:57.169 Variable Capacity Management: Not Supported 00:13:57.169 Delete Endurance Group: Not Supported 00:13:57.169 Delete NVM Set: Not Supported 00:13:57.169 Extended LBA Formats Supported: Not Supported 00:13:57.169 Flexible Data Placement Supported: Not Supported 00:13:57.169 00:13:57.169 Controller Memory Buffer Support 00:13:57.169 ================================ 00:13:57.169 Supported: No 00:13:57.169 00:13:57.169 Persistent Memory Region Support 00:13:57.169 ================================ 00:13:57.169 Supported: No 00:13:57.169 00:13:57.169 Admin Command Set Attributes 00:13:57.169 ============================ 00:13:57.169 Security Send/Receive: Not Supported 00:13:57.169 Format NVM: Not Supported 00:13:57.169 Firmware Activate/Download: Not Supported 00:13:57.169 Namespace Management: Not Supported 00:13:57.169 Device Self-Test: 
Not Supported 00:13:57.169 Directives: Not Supported 00:13:57.169 NVMe-MI: Not Supported 00:13:57.169 Virtualization Management: Not Supported 00:13:57.169 Doorbell Buffer Config: Not Supported 00:13:57.169 Get LBA Status Capability: Not Supported 00:13:57.169 Command & Feature Lockdown Capability: Not Supported 00:13:57.169 Abort Command Limit: 4 00:13:57.169 Async Event Request Limit: 4 00:13:57.169 Number of Firmware Slots: N/A 00:13:57.169 Firmware Slot 1 Read-Only: N/A 00:13:57.169 Firmware Activation Without Reset: N/A 00:13:57.169 Multiple Update Detection Support: N/A 00:13:57.169 Firmware Update Granularity: No Information Provided 00:13:57.169 Per-Namespace SMART Log: No 00:13:57.169 Asymmetric Namespace Access Log Page: Not Supported 00:13:57.169 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:57.169 Command Effects Log Page: Supported 00:13:57.169 Get Log Page Extended Data: Supported 00:13:57.169 Telemetry Log Pages: Not Supported 00:13:57.169 Persistent Event Log Pages: Not Supported 00:13:57.169 Supported Log Pages Log Page: May Support 00:13:57.169 Commands Supported & Effects Log Page: Not Supported 00:13:57.169 Feature Identifiers & Effects Log Page:May Support 00:13:57.169 NVMe-MI Commands & Effects Log Page: May Support 00:13:57.169 Data Area 4 for Telemetry Log: Not Supported 00:13:57.169 Error Log Page Entries Supported: 128 00:13:57.170 Keep Alive: Supported 00:13:57.170 Keep Alive Granularity: 10000 ms 00:13:57.170 00:13:57.170 NVM Command Set Attributes 00:13:57.170 ========================== 00:13:57.170 Submission Queue Entry Size 00:13:57.170 Max: 64 00:13:57.170 Min: 64 00:13:57.170 Completion Queue Entry Size 00:13:57.170 Max: 16 00:13:57.170 Min: 16 00:13:57.170 Number of Namespaces: 32 00:13:57.170 Compare Command: Supported 00:13:57.170 Write Uncorrectable Command: Not Supported 00:13:57.170 Dataset Management Command: Supported 00:13:57.170 Write Zeroes Command: Supported 00:13:57.170 Set Features Save Field: Not Supported 
00:13:57.170 Reservations: Not Supported 00:13:57.170 Timestamp: Not Supported 00:13:57.170 Copy: Supported 00:13:57.170 Volatile Write Cache: Present 00:13:57.170 Atomic Write Unit (Normal): 1 00:13:57.170 Atomic Write Unit (PFail): 1 00:13:57.170 Atomic Compare & Write Unit: 1 00:13:57.170 Fused Compare & Write: Supported 00:13:57.170 Scatter-Gather List 00:13:57.170 SGL Command Set: Supported (Dword aligned) 00:13:57.170 SGL Keyed: Not Supported 00:13:57.170 SGL Bit Bucket Descriptor: Not Supported 00:13:57.170 SGL Metadata Pointer: Not Supported 00:13:57.170 Oversized SGL: Not Supported 00:13:57.170 SGL Metadata Address: Not Supported 00:13:57.170 SGL Offset: Not Supported 00:13:57.170 Transport SGL Data Block: Not Supported 00:13:57.170 Replay Protected Memory Block: Not Supported 00:13:57.170 00:13:57.170 Firmware Slot Information 00:13:57.170 ========================= 00:13:57.170 Active slot: 1 00:13:57.170 Slot 1 Firmware Revision: 25.01 00:13:57.170 00:13:57.170 00:13:57.170 Commands Supported and Effects 00:13:57.170 ============================== 00:13:57.170 Admin Commands 00:13:57.170 -------------- 00:13:57.170 Get Log Page (02h): Supported 00:13:57.170 Identify (06h): Supported 00:13:57.170 Abort (08h): Supported 00:13:57.170 Set Features (09h): Supported 00:13:57.170 Get Features (0Ah): Supported 00:13:57.170 Asynchronous Event Request (0Ch): Supported 00:13:57.170 Keep Alive (18h): Supported 00:13:57.170 I/O Commands 00:13:57.170 ------------ 00:13:57.170 Flush (00h): Supported LBA-Change 00:13:57.170 Write (01h): Supported LBA-Change 00:13:57.170 Read (02h): Supported 00:13:57.170 Compare (05h): Supported 00:13:57.170 Write Zeroes (08h): Supported LBA-Change 00:13:57.170 Dataset Management (09h): Supported LBA-Change 00:13:57.170 Copy (19h): Supported LBA-Change 00:13:57.170 00:13:57.170 Error Log 00:13:57.170 ========= 00:13:57.170 00:13:57.170 Arbitration 00:13:57.170 =========== 00:13:57.170 Arbitration Burst: 1 00:13:57.170 00:13:57.170 Power 
Management 00:13:57.170 ================ 00:13:57.170 Number of Power States: 1 00:13:57.170 Current Power State: Power State #0 00:13:57.170 Power State #0: 00:13:57.170 Max Power: 0.00 W 00:13:57.170 Non-Operational State: Operational 00:13:57.170 Entry Latency: Not Reported 00:13:57.170 Exit Latency: Not Reported 00:13:57.170 Relative Read Throughput: 0 00:13:57.170 Relative Read Latency: 0 00:13:57.170 Relative Write Throughput: 0 00:13:57.170 Relative Write Latency: 0 00:13:57.170 Idle Power: Not Reported 00:13:57.170 Active Power: Not Reported 00:13:57.170 Non-Operational Permissive Mode: Not Supported 00:13:57.170 00:13:57.170 Health Information 00:13:57.170 ================== 00:13:57.170 Critical Warnings: 00:13:57.170 Available Spare Space: OK 00:13:57.170 Temperature: OK 00:13:57.170 Device Reliability: OK 00:13:57.170 Read Only: No 00:13:57.170 Volatile Memory Backup: OK 00:13:57.170 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:57.170 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:57.170 Available Spare: 0% 00:13:57.170 Available Spare Threshold: 0% 00:13:57.170 [2024-10-01 15:11:06.958991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:57.170 [2024-10-01 15:11:06.959007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:57.170 [2024-10-01 15:11:06.959035] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:57.170 [2024-10-01 15:11:06.959045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.170 [2024-10-01 15:11:06.959052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.170 [2024-10-01 15:11:06.959058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.170 [2024-10-01 15:11:06.959065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:57.170 [2024-10-01 15:11:06.960084] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:57.170 [2024-10-01 15:11:06.960095] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:57.170 [2024-10-01 15:11:06.961081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:57.170 [2024-10-01 15:11:06.963008] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:57.170 [2024-10-01 15:11:06.963014] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:57.170 [2024-10-01 15:11:06.963097] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:57.170 [2024-10-01 15:11:06.963107] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:57.170 [2024-10-01 15:11:06.963173] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:57.170 [2024-10-01 15:11:06.965121] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:57.170 Life Percentage Used: 0% 00:13:57.170 Data Units Read: 0 00:13:57.170 Data Units Written: 0 00:13:57.170 Host Read Commands: 0 00:13:57.170 Host Write Commands: 0 00:13:57.170 Controller Busy Time: 0 minutes 
00:13:57.170 Power Cycles: 0 00:13:57.170 Power On Hours: 0 hours 00:13:57.170 Unsafe Shutdowns: 0 00:13:57.170 Unrecoverable Media Errors: 0 00:13:57.170 Lifetime Error Log Entries: 0 00:13:57.170 Warning Temperature Time: 0 minutes 00:13:57.170 Critical Temperature Time: 0 minutes 00:13:57.170 00:13:57.170 Number of Queues 00:13:57.170 ================ 00:13:57.170 Number of I/O Submission Queues: 127 00:13:57.170 Number of I/O Completion Queues: 127 00:13:57.170 00:13:57.170 Active Namespaces 00:13:57.170 ================= 00:13:57.170 Namespace ID:1 00:13:57.170 Error Recovery Timeout: Unlimited 00:13:57.170 Command Set Identifier: NVM (00h) 00:13:57.170 Deallocate: Supported 00:13:57.170 Deallocated/Unwritten Error: Not Supported 00:13:57.170 Deallocated Read Value: Unknown 00:13:57.170 Deallocate in Write Zeroes: Not Supported 00:13:57.170 Deallocated Guard Field: 0xFFFF 00:13:57.170 Flush: Supported 00:13:57.170 Reservation: Supported 00:13:57.170 Namespace Sharing Capabilities: Multiple Controllers 00:13:57.170 Size (in LBAs): 131072 (0GiB) 00:13:57.170 Capacity (in LBAs): 131072 (0GiB) 00:13:57.170 Utilization (in LBAs): 131072 (0GiB) 00:13:57.170 NGUID: 7FFD7C583E874DA4950364F8FCDB62FF 00:13:57.170 UUID: 7ffd7c58-3e87-4da4-9503-64f8fcdb62ff 00:13:57.170 Thin Provisioning: Not Supported 00:13:57.170 Per-NS Atomic Units: Yes 00:13:57.170 Atomic Boundary Size (Normal): 0 00:13:57.170 Atomic Boundary Size (PFail): 0 00:13:57.170 Atomic Boundary Offset: 0 00:13:57.170 Maximum Single Source Range Length: 65535 00:13:57.170 Maximum Copy Length: 65535 00:13:57.170 Maximum Source Range Count: 1 00:13:57.170 NGUID/EUI64 Never Reused: No 00:13:57.170 Namespace Write Protected: No 00:13:57.170 Number of LBA Formats: 1 00:13:57.170 Current LBA Format: LBA Format #00 00:13:57.170 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:57.170 00:13:57.170 15:11:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:57.432 [2024-10-01 15:11:07.151617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:02.723 Initializing NVMe Controllers 00:14:02.723 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:02.723 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:02.723 Initialization complete. Launching workers. 00:14:02.723 ======================================================== 00:14:02.723 Latency(us) 00:14:02.723 Device Information : IOPS MiB/s Average min max 00:14:02.723 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39989.75 156.21 3200.47 847.54 9768.35 00:14:02.723 ======================================================== 00:14:02.723 Total : 39989.75 156.21 3200.47 847.54 9768.35 00:14:02.723 00:14:02.723 [2024-10-01 15:11:12.168902] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:02.723 15:11:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:02.723 [2024-10-01 15:11:12.347724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:08.013 Initializing NVMe Controllers 00:14:08.013 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:08.013 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:08.013 
Initialization complete. Launching workers. 00:14:08.013 ======================================================== 00:14:08.013 Latency(us) 00:14:08.013 Device Information : IOPS MiB/s Average min max 00:14:08.013 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.13 62.73 7976.09 6983.48 8980.20 00:14:08.013 ======================================================== 00:14:08.013 Total : 16059.13 62.73 7976.09 6983.48 8980.20 00:14:08.013 00:14:08.013 [2024-10-01 15:11:17.389448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:08.013 15:11:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:08.013 [2024-10-01 15:11:17.579304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:13.299 [2024-10-01 15:11:22.650162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:13.299 Initializing NVMe Controllers 00:14:13.299 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:13.299 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:13.299 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:13.299 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:13.299 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:13.299 Initialization complete. Launching workers. 
00:14:13.299 Starting thread on core 2 00:14:13.299 Starting thread on core 3 00:14:13.299 Starting thread on core 1 00:14:13.299 15:11:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:13.299 [2024-10-01 15:11:22.918435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:16.596 [2024-10-01 15:11:26.025129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:16.596 Initializing NVMe Controllers 00:14:16.596 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:16.596 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:16.596 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:16.596 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:16.596 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:16.596 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:16.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:16.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:16.596 Initialization complete. Launching workers. 
00:14:16.596 Starting thread on core 1 with urgent priority queue
00:14:16.596 Starting thread on core 2 with urgent priority queue
00:14:16.596 Starting thread on core 3 with urgent priority queue
00:14:16.596 Starting thread on core 0 with urgent priority queue
00:14:16.596 SPDK bdev Controller (SPDK1 ) core 0: 10587.00 IO/s 9.45 secs/100000 ios
00:14:16.596 SPDK bdev Controller (SPDK1 ) core 1: 11241.00 IO/s 8.90 secs/100000 ios
00:14:16.596 SPDK bdev Controller (SPDK1 ) core 2: 9557.00 IO/s 10.46 secs/100000 ios
00:14:16.596 SPDK bdev Controller (SPDK1 ) core 3: 11974.67 IO/s 8.35 secs/100000 ios
00:14:16.596 ========================================================
00:14:16.596
00:14:16.596 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:14:16.596 [2024-10-01 15:11:26.287434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:16.596 Initializing NVMe Controllers
00:14:16.596 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:16.596 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:16.596 Namespace ID: 1 size: 0GB
00:14:16.596 Initialization complete.
00:14:16.596 INFO: using host memory buffer for IO
00:14:16.596 Hello world!
00:14:16.596 [2024-10-01 15:11:26.319597] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:16.596 15:11:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:16.856 [2024-10-01 15:11:26.583456] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:17.797 Initializing NVMe Controllers 00:14:17.797 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:17.797 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:17.797 Initialization complete. Launching workers. 00:14:17.797 submit (in ns) avg, min, max = 8565.5, 3893.3, 4028991.7 00:14:17.797 complete (in ns) avg, min, max = 17202.1, 2395.8, 4995539.2 00:14:17.797 00:14:17.797 Submit histogram 00:14:17.797 ================ 00:14:17.797 Range in us Cumulative Count 00:14:17.797 3.893 - 3.920: 1.4671% ( 280) 00:14:17.797 3.920 - 3.947: 7.2937% ( 1112) 00:14:17.797 3.947 - 3.973: 19.4865% ( 2327) 00:14:17.797 3.973 - 4.000: 32.6487% ( 2512) 00:14:17.797 4.000 - 4.027: 43.3377% ( 2040) 00:14:17.797 4.027 - 4.053: 53.6442% ( 1967) 00:14:17.797 4.053 - 4.080: 69.8926% ( 3101) 00:14:17.797 4.080 - 4.107: 83.6835% ( 2632) 00:14:17.797 4.107 - 4.133: 93.3351% ( 1842) 00:14:17.797 4.133 - 4.160: 97.5373% ( 802) 00:14:17.797 4.160 - 4.187: 98.9206% ( 264) 00:14:17.797 4.187 - 4.213: 99.3765% ( 87) 00:14:17.797 4.213 - 4.240: 99.4551% ( 15) 00:14:17.797 4.240 - 4.267: 99.4655% ( 2) 00:14:17.797 4.267 - 4.293: 99.4708% ( 1) 00:14:17.797 4.347 - 4.373: 99.4760% ( 1) 00:14:17.797 4.533 - 4.560: 99.4813% ( 1) 00:14:17.797 4.560 - 4.587: 99.4917% ( 2) 00:14:17.797 4.640 - 4.667: 99.4970% ( 1) 00:14:17.797 4.747 - 4.773: 99.5022% ( 1) 00:14:17.797 4.827 - 4.853: 99.5075% ( 1) 
00:14:17.797 4.907 - 4.933: 99.5127% ( 1) 00:14:17.797 5.013 - 5.040: 99.5179% ( 1) 00:14:17.797 5.067 - 5.093: 99.5232% ( 1) 00:14:17.797 5.200 - 5.227: 99.5284% ( 1) 00:14:17.797 5.440 - 5.467: 99.5389% ( 2) 00:14:17.797 5.547 - 5.573: 99.5441% ( 1) 00:14:17.797 5.840 - 5.867: 99.5494% ( 1) 00:14:17.797 5.893 - 5.920: 99.5546% ( 1) 00:14:17.797 6.000 - 6.027: 99.5651% ( 2) 00:14:17.797 6.027 - 6.053: 99.5756% ( 2) 00:14:17.797 6.053 - 6.080: 99.5808% ( 1) 00:14:17.797 6.080 - 6.107: 99.5861% ( 1) 00:14:17.797 6.107 - 6.133: 99.5913% ( 1) 00:14:17.797 6.160 - 6.187: 99.6070% ( 3) 00:14:17.797 6.187 - 6.213: 99.6123% ( 1) 00:14:17.797 6.240 - 6.267: 99.6175% ( 1) 00:14:17.797 6.267 - 6.293: 99.6227% ( 1) 00:14:17.797 6.293 - 6.320: 99.6280% ( 1) 00:14:17.797 6.320 - 6.347: 99.6332% ( 1) 00:14:17.797 6.373 - 6.400: 99.6437% ( 2) 00:14:17.797 6.453 - 6.480: 99.6594% ( 3) 00:14:17.797 6.507 - 6.533: 99.6699% ( 2) 00:14:17.797 6.533 - 6.560: 99.6751% ( 1) 00:14:17.797 6.587 - 6.613: 99.6804% ( 1) 00:14:17.797 6.613 - 6.640: 99.6909% ( 2) 00:14:17.797 6.640 - 6.667: 99.6961% ( 1) 00:14:17.797 6.693 - 6.720: 99.7013% ( 1) 00:14:17.797 6.720 - 6.747: 99.7066% ( 1) 00:14:17.797 6.747 - 6.773: 99.7118% ( 1) 00:14:17.797 6.800 - 6.827: 99.7171% ( 1) 00:14:17.797 6.827 - 6.880: 99.7223% ( 1) 00:14:17.797 6.880 - 6.933: 99.7328% ( 2) 00:14:17.797 6.933 - 6.987: 99.7380% ( 1) 00:14:17.797 7.040 - 7.093: 99.7485% ( 2) 00:14:17.797 7.093 - 7.147: 99.7590% ( 2) 00:14:17.797 7.147 - 7.200: 99.7695% ( 2) 00:14:17.797 7.200 - 7.253: 99.7747% ( 1) 00:14:17.797 7.307 - 7.360: 99.7799% ( 1) 00:14:17.797 7.360 - 7.413: 99.7904% ( 2) 00:14:17.797 7.413 - 7.467: 99.8061% ( 3) 00:14:17.797 7.467 - 7.520: 99.8166% ( 2) 00:14:17.797 7.520 - 7.573: 99.8271% ( 2) 00:14:17.797 7.573 - 7.627: 99.8323% ( 1) 00:14:17.797 7.627 - 7.680: 99.8428% ( 2) 00:14:17.797 7.733 - 7.787: 99.8480% ( 1) 00:14:17.797 7.787 - 7.840: 99.8533% ( 1) 00:14:17.797 8.000 - 8.053: 99.8585% ( 1) 00:14:17.797 8.533 - 
8.587: 99.8638% ( 1) 00:14:17.797 8.693 - 8.747: 99.8690% ( 1) 00:14:17.797 12.427 - 12.480: 99.8742% ( 1) 00:14:17.797 45.440 - 45.653: 99.8795% ( 1) 00:14:17.797 147.627 - 148.480: 99.8847% ( 1) 00:14:17.797 3031.040 - 3044.693: 99.8900% ( 1) 00:14:17.797 3058.347 - 3072.000: 99.8952% ( 1) 00:14:17.797 3986.773 - 4014.080: 99.9948% ( 19) 00:14:17.797 4014.080 - 4041.387: 100.0000% ( 1) 00:14:17.797 00:14:17.797 Complete histogram 00:14:17.797 ================== 00:14:17.797 Range in us Cumulative Count 00:14:17.797 2.387 - [2024-10-01 15:11:27.606952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:18.058 2.400: 0.0314% ( 6) 00:14:18.058 2.400 - 2.413: 0.1520% ( 23) 00:14:18.058 2.413 - 2.427: 0.6026% ( 86) 00:14:18.058 2.427 - 2.440: 0.9798% ( 72) 00:14:18.058 2.440 - 2.453: 9.6149% ( 1648) 00:14:18.058 2.453 - 2.467: 16.7357% ( 1359) 00:14:18.058 2.467 - 2.480: 50.6733% ( 6477) 00:14:18.058 2.480 - 2.493: 67.7548% ( 3260) 00:14:18.058 2.493 - 2.507: 74.6974% ( 1325) 00:14:18.058 2.507 - 2.520: 80.0524% ( 1022) 00:14:18.058 2.520 - 2.533: 83.6468% ( 686) 00:14:18.058 2.533 - 2.547: 87.3566% ( 708) 00:14:18.058 2.547 - 2.560: 92.6120% ( 1003) 00:14:18.058 2.560 - 2.573: 96.7199% ( 784) 00:14:18.058 2.573 - 2.587: 98.3390% ( 309) 00:14:18.058 2.587 - 2.600: 99.0621% ( 138) 00:14:18.058 2.600 - 2.613: 99.3870% ( 62) 00:14:18.058 2.613 - 2.627: 99.4236% ( 7) 00:14:18.058 2.627 - 2.640: 99.4289% ( 1) 00:14:18.058 2.720 - 2.733: 99.4341% ( 1) 00:14:18.058 2.787 - 2.800: 99.4394% ( 1) 00:14:18.058 3.147 - 3.160: 99.4446% ( 1) 00:14:18.058 4.293 - 4.320: 99.4498% ( 1) 00:14:18.058 4.373 - 4.400: 99.4603% ( 2) 00:14:18.058 4.427 - 4.453: 99.4708% ( 2) 00:14:18.058 4.480 - 4.507: 99.4760% ( 1) 00:14:18.058 4.507 - 4.533: 99.4813% ( 1) 00:14:18.058 4.587 - 4.613: 99.4865% ( 1) 00:14:18.058 4.667 - 4.693: 99.4917% ( 1) 00:14:18.058 4.693 - 4.720: 99.4970% ( 1) 00:14:18.058 4.747 - 4.773: 99.5022% ( 1) 00:14:18.058 
4.827 - 4.853: 99.5075% ( 1)
00:14:18.058 4.853 - 4.880: 99.5127% ( 1)
00:14:18.058 4.880 - 4.907: 99.5179% ( 1)
00:14:18.058 4.907 - 4.933: 99.5232% ( 1)
00:14:18.058 4.987 - 5.013: 99.5284% ( 1)
00:14:18.058 5.067 - 5.093: 99.5337% ( 1)
00:14:18.058 5.093 - 5.120: 99.5389% ( 1)
00:14:18.058 5.173 - 5.200: 99.5494% ( 2)
00:14:18.058 5.333 - 5.360: 99.5546% ( 1)
00:14:18.058 5.360 - 5.387: 99.5703% ( 3)
00:14:18.058 5.387 - 5.413: 99.5756% ( 1)
00:14:18.058 5.467 - 5.493: 99.5861% ( 2)
00:14:18.058 5.493 - 5.520: 99.5913% ( 1)
00:14:18.058 5.680 - 5.707: 99.5965% ( 1)
00:14:18.058 5.760 - 5.787: 99.6070% ( 2)
00:14:18.058 5.867 - 5.893: 99.6123% ( 1)
00:14:18.058 5.947 - 5.973: 99.6175% ( 1)
00:14:18.058 10.187 - 10.240: 99.6227% ( 1)
00:14:18.058 10.933 - 10.987: 99.6280% ( 1)
00:14:18.058 13.067 - 13.120: 99.6332% ( 1)
00:14:18.058 3986.773 - 4014.080: 99.9948% ( 69)
00:14:18.058 4969.813 - 4997.120: 100.0000% ( 1)
00:14:18.058
00:14:18.058 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:14:18.058 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:14:18.058 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:14:18.058 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:14:18.058 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:18.058 [
00:14:18.058 {
00:14:18.058 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:18.058 "subtype": "Discovery",
00:14:18.058 "listen_addresses": [],
00:14:18.058 "allow_any_host": true,
00:14:18.058 "hosts": []
00:14:18.058 },
00:14:18.058 {
"nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:18.058 "subtype": "NVMe",
00:14:18.058 "listen_addresses": [
00:14:18.058 {
00:14:18.058 "trtype": "VFIOUSER",
00:14:18.058 "adrfam": "IPv4",
00:14:18.058 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:18.058 "trsvcid": "0"
00:14:18.058 }
00:14:18.058 ],
00:14:18.058 "allow_any_host": true,
00:14:18.058 "hosts": [],
00:14:18.058 "serial_number": "SPDK1",
00:14:18.058 "model_number": "SPDK bdev Controller",
00:14:18.058 "max_namespaces": 32,
00:14:18.058 "min_cntlid": 1,
00:14:18.058 "max_cntlid": 65519,
00:14:18.058 "namespaces": [
00:14:18.058 {
00:14:18.058 "nsid": 1,
00:14:18.058 "bdev_name": "Malloc1",
00:14:18.058 "name": "Malloc1",
00:14:18.058 "nguid": "7FFD7C583E874DA4950364F8FCDB62FF",
00:14:18.058 "uuid": "7ffd7c58-3e87-4da4-9503-64f8fcdb62ff"
00:14:18.058 }
00:14:18.058 ]
00:14:18.058 },
00:14:18.058 {
00:14:18.058 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:14:18.058 "subtype": "NVMe",
00:14:18.058 "listen_addresses": [
00:14:18.058 {
00:14:18.058 "trtype": "VFIOUSER",
00:14:18.058 "adrfam": "IPv4",
00:14:18.058 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:14:18.058 "trsvcid": "0"
00:14:18.058 }
00:14:18.058 ],
00:14:18.058 "allow_any_host": true,
00:14:18.058 "hosts": [],
00:14:18.058 "serial_number": "SPDK2",
00:14:18.059 "model_number": "SPDK bdev Controller",
00:14:18.059 "max_namespaces": 32,
00:14:18.059 "min_cntlid": 1,
00:14:18.059 "max_cntlid": 65519,
00:14:18.059 "namespaces": [
00:14:18.059 {
00:14:18.059 "nsid": 1,
00:14:18.059 "bdev_name": "Malloc2",
00:14:18.059 "name": "Malloc2",
00:14:18.059 "nguid": "259BBCB932FB407DBC3842082D0FAAB1",
00:14:18.059 "uuid": "259bbcb9-32fb-407d-bc38-42082d0faab1"
00:14:18.059 }
00:14:18.059 ]
00:14:18.059 }
00:14:18.059 ]
00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34
-- # aerpid=3916736 00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:18.059 15:11:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:18.319 [2024-10-01 15:11:28.013411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:18.319 Malloc3 00:14:18.319 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:18.579 [2024-10-01 15:11:28.191554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:18.579 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems
00:14:18.579 Asynchronous Event Request test
00:14:18.579 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:18.579 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:18.579 Registering asynchronous event callbacks...
00:14:18.579 Starting namespace attribute notice tests for all controllers...
00:14:18.579 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:14:18.579 aer_cb - Changed Namespace
00:14:18.579 Cleaning up...
00:14:18.579 [
00:14:18.579 {
00:14:18.579 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:18.579 "subtype": "Discovery",
00:14:18.579 "listen_addresses": [],
00:14:18.579 "allow_any_host": true,
00:14:18.579 "hosts": []
00:14:18.579 },
00:14:18.579 {
00:14:18.579 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:18.579 "subtype": "NVMe",
00:14:18.579 "listen_addresses": [
00:14:18.579 {
00:14:18.579 "trtype": "VFIOUSER",
00:14:18.579 "adrfam": "IPv4",
00:14:18.579 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:18.579 "trsvcid": "0"
00:14:18.579 }
00:14:18.579 ],
00:14:18.579 "allow_any_host": true,
00:14:18.579 "hosts": [],
00:14:18.579 "serial_number": "SPDK1",
00:14:18.579 "model_number": "SPDK bdev Controller",
00:14:18.579 "max_namespaces": 32,
00:14:18.579 "min_cntlid": 1,
00:14:18.579 "max_cntlid": 65519,
00:14:18.579 "namespaces": [
00:14:18.579 {
00:14:18.579 "nsid": 1,
00:14:18.579 "bdev_name": "Malloc1",
00:14:18.579 "name": "Malloc1",
00:14:18.579 "nguid": "7FFD7C583E874DA4950364F8FCDB62FF",
00:14:18.579 "uuid": "7ffd7c58-3e87-4da4-9503-64f8fcdb62ff"
00:14:18.579 },
00:14:18.579 {
00:14:18.579 "nsid": 2,
00:14:18.579 "bdev_name": "Malloc3",
00:14:18.580 "name": "Malloc3",
00:14:18.580 "nguid": "CDB037E0EA9841A787E45A7718F5F901",
00:14:18.580 "uuid": "cdb037e0-ea98-41a7-87e4-5a7718f5f901"
00:14:18.580 }
00:14:18.580 ]
00:14:18.580 },
00:14:18.580 {
00:14:18.580 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:14:18.580 "subtype": "NVMe",
00:14:18.580 "listen_addresses": [
00:14:18.580 {
00:14:18.580 "trtype": "VFIOUSER",
00:14:18.580 "adrfam": "IPv4",
00:14:18.580 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:14:18.580 "trsvcid": "0"
00:14:18.580 }
00:14:18.580 ],
00:14:18.580 "allow_any_host": true,
00:14:18.580 "hosts": [],
00:14:18.580 "serial_number": "SPDK2",
00:14:18.580 "model_number": "SPDK bdev Controller",
00:14:18.580 "max_namespaces": 32,
00:14:18.580 "min_cntlid": 1,
00:14:18.580 "max_cntlid": 65519,
00:14:18.580 "namespaces": [
00:14:18.580 {
00:14:18.580 "nsid": 1,
00:14:18.580 "bdev_name": "Malloc2",
00:14:18.580 "name": "Malloc2",
00:14:18.580 "nguid": "259BBCB932FB407DBC3842082D0FAAB1",
00:14:18.580 "uuid": "259bbcb9-32fb-407d-bc38-42082d0faab1"
00:14:18.580 }
00:14:18.580 ]
00:14:18.580 }
00:14:18.580 ]
00:14:18.580 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3916736
00:14:18.580 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:14:18.580 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:14:18.580 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:14:18.580 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:14:18.580 [2024-10-01 15:11:28.428182] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:14:18.580 [2024-10-01 15:11:28.428224] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916810 ] 00:14:18.843 [2024-10-01 15:11:28.462161] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:18.843 [2024-10-01 15:11:28.464402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:18.843 [2024-10-01 15:11:28.464427] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f632971a000 00:14:18.843 [2024-10-01 15:11:28.465407] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.843 [2024-10-01 15:11:28.466419] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.843 [2024-10-01 15:11:28.467421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.843 [2024-10-01 15:11:28.468428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.843 [2024-10-01 15:11:28.469441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.843 [2024-10-01 15:11:28.470441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.843 [2024-10-01 15:11:28.471447] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.843 
[2024-10-01 15:11:28.472454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.843 [2024-10-01 15:11:28.473461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:18.843 [2024-10-01 15:11:28.473472] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f632970f000 00:14:18.843 [2024-10-01 15:11:28.474797] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:18.843 [2024-10-01 15:11:28.494159] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:18.843 [2024-10-01 15:11:28.494184] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:18.843 [2024-10-01 15:11:28.499281] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:18.843 [2024-10-01 15:11:28.499326] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:18.843 [2024-10-01 15:11:28.499407] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:18.843 [2024-10-01 15:11:28.499423] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:18.843 [2024-10-01 15:11:28.499428] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:18.843 [2024-10-01 15:11:28.500286] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:18.843 [2024-10-01 15:11:28.500296] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:18.843 [2024-10-01 15:11:28.500303] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:18.843 [2024-10-01 15:11:28.501289] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:18.843 [2024-10-01 15:11:28.501298] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:18.843 [2024-10-01 15:11:28.501306] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:18.843 [2024-10-01 15:11:28.502298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:18.843 [2024-10-01 15:11:28.502308] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:18.843 [2024-10-01 15:11:28.503300] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:18.843 [2024-10-01 15:11:28.503313] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:18.843 [2024-10-01 15:11:28.503318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:18.843 [2024-10-01 15:11:28.503325] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:18.843 [2024-10-01 15:11:28.503431] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:18.843 [2024-10-01 15:11:28.503436] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:18.843 [2024-10-01 15:11:28.503441] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:18.843 [2024-10-01 15:11:28.504313] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:18.843 [2024-10-01 15:11:28.505318] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:18.843 [2024-10-01 15:11:28.506329] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:18.843 [2024-10-01 15:11:28.507332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:18.843 [2024-10-01 15:11:28.507372] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:18.843 [2024-10-01 15:11:28.508350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:18.843 [2024-10-01 15:11:28.508359] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:18.843 [2024-10-01 15:11:28.508364] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:18.843 [2024-10-01 15:11:28.508386] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:18.843 [2024-10-01 15:11:28.508397] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:18.843 [2024-10-01 15:11:28.508409] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.843 [2024-10-01 15:11:28.508415] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.843 [2024-10-01 15:11:28.508418] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.843 [2024-10-01 15:11:28.508431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.843 [2024-10-01 15:11:28.515004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:18.843 [2024-10-01 15:11:28.515016] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:18.843 [2024-10-01 15:11:28.515021] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:18.843 [2024-10-01 15:11:28.515026] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:18.843 [2024-10-01 15:11:28.515031] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:18.843 [2024-10-01 15:11:28.515036] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:18.843 [2024-10-01 15:11:28.515043] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:18.843 [2024-10-01 15:11:28.515048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.515055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.515065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.523003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.523016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.844 [2024-10-01 15:11:28.523025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.844 [2024-10-01 15:11:28.523034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.844 [2024-10-01 15:11:28.523042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.844 [2024-10-01 15:11:28.523047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.523057] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.523066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.531003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.531011] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:18.844 [2024-10-01 15:11:28.531016] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.531023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.531031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.531040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.539001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.539067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.539075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.539083] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:18.844 [2024-10-01 15:11:28.539088] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:18.844 [2024-10-01 15:11:28.539091] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.844 [2024-10-01 15:11:28.539097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.547002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.547014] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:18.844 [2024-10-01 15:11:28.547023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.547031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.547038] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.844 [2024-10-01 15:11:28.547043] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.844 [2024-10-01 15:11:28.547046] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.844 [2024-10-01 15:11:28.547052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.555002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.555016] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.555025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.555032] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.844 [2024-10-01 15:11:28.555037] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.844 [2024-10-01 15:11:28.555040] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.844 [2024-10-01 15:11:28.555046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.563005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.563017] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.563024] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.563032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.563037] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.563042] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.563048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.563053] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:18.844 [2024-10-01 15:11:28.563057] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:18.844 [2024-10-01 15:11:28.563062] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:18.844 [2024-10-01 15:11:28.563084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.571002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.571017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.579004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.579017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.587001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.587015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.595004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.595021] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:18.844 [2024-10-01 15:11:28.595026] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:18.844 [2024-10-01 15:11:28.595029] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:18.844 [2024-10-01 15:11:28.595033] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:18.844 [2024-10-01 15:11:28.595036] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:18.844 [2024-10-01 15:11:28.595043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:18.844 [2024-10-01 15:11:28.595051] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:18.844 [2024-10-01 15:11:28.595055] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:18.844 [2024-10-01 15:11:28.595059] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.844 [2024-10-01 15:11:28.595064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.595072] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:18.844 [2024-10-01 15:11:28.595076] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.844 
[2024-10-01 15:11:28.595079] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.844 [2024-10-01 15:11:28.595085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.595093] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:18.844 [2024-10-01 15:11:28.595098] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:18.844 [2024-10-01 15:11:28.595101] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.844 [2024-10-01 15:11:28.595107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:18.844 [2024-10-01 15:11:28.603004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.603019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.603030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:18.844 [2024-10-01 15:11:28.603040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:18.844 ===================================================== 00:14:18.844 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:18.844 ===================================================== 00:14:18.844 Controller Capabilities/Features 00:14:18.844 ================================ 00:14:18.844 Vendor ID: 4e58 00:14:18.844 Subsystem Vendor ID: 4e58 
00:14:18.844 Serial Number: SPDK2 00:14:18.844 Model Number: SPDK bdev Controller 00:14:18.844 Firmware Version: 25.01 00:14:18.845 Recommended Arb Burst: 6 00:14:18.845 IEEE OUI Identifier: 8d 6b 50 00:14:18.845 Multi-path I/O 00:14:18.845 May have multiple subsystem ports: Yes 00:14:18.845 May have multiple controllers: Yes 00:14:18.845 Associated with SR-IOV VF: No 00:14:18.845 Max Data Transfer Size: 131072 00:14:18.845 Max Number of Namespaces: 32 00:14:18.845 Max Number of I/O Queues: 127 00:14:18.845 NVMe Specification Version (VS): 1.3 00:14:18.845 NVMe Specification Version (Identify): 1.3 00:14:18.845 Maximum Queue Entries: 256 00:14:18.845 Contiguous Queues Required: Yes 00:14:18.845 Arbitration Mechanisms Supported 00:14:18.845 Weighted Round Robin: Not Supported 00:14:18.845 Vendor Specific: Not Supported 00:14:18.845 Reset Timeout: 15000 ms 00:14:18.845 Doorbell Stride: 4 bytes 00:14:18.845 NVM Subsystem Reset: Not Supported 00:14:18.845 Command Sets Supported 00:14:18.845 NVM Command Set: Supported 00:14:18.845 Boot Partition: Not Supported 00:14:18.845 Memory Page Size Minimum: 4096 bytes 00:14:18.845 Memory Page Size Maximum: 4096 bytes 00:14:18.845 Persistent Memory Region: Not Supported 00:14:18.845 Optional Asynchronous Events Supported 00:14:18.845 Namespace Attribute Notices: Supported 00:14:18.845 Firmware Activation Notices: Not Supported 00:14:18.845 ANA Change Notices: Not Supported 00:14:18.845 PLE Aggregate Log Change Notices: Not Supported 00:14:18.845 LBA Status Info Alert Notices: Not Supported 00:14:18.845 EGE Aggregate Log Change Notices: Not Supported 00:14:18.845 Normal NVM Subsystem Shutdown event: Not Supported 00:14:18.845 Zone Descriptor Change Notices: Not Supported 00:14:18.845 Discovery Log Change Notices: Not Supported 00:14:18.845 Controller Attributes 00:14:18.845 128-bit Host Identifier: Supported 00:14:18.845 Non-Operational Permissive Mode: Not Supported 00:14:18.845 NVM Sets: Not Supported 00:14:18.845 Read Recovery 
Levels: Not Supported 00:14:18.845 Endurance Groups: Not Supported 00:14:18.845 Predictable Latency Mode: Not Supported 00:14:18.845 Traffic Based Keep ALive: Not Supported 00:14:18.845 Namespace Granularity: Not Supported 00:14:18.845 SQ Associations: Not Supported 00:14:18.845 UUID List: Not Supported 00:14:18.845 Multi-Domain Subsystem: Not Supported 00:14:18.845 Fixed Capacity Management: Not Supported 00:14:18.845 Variable Capacity Management: Not Supported 00:14:18.845 Delete Endurance Group: Not Supported 00:14:18.845 Delete NVM Set: Not Supported 00:14:18.845 Extended LBA Formats Supported: Not Supported 00:14:18.845 Flexible Data Placement Supported: Not Supported 00:14:18.845 00:14:18.845 Controller Memory Buffer Support 00:14:18.845 ================================ 00:14:18.845 Supported: No 00:14:18.845 00:14:18.845 Persistent Memory Region Support 00:14:18.845 ================================ 00:14:18.845 Supported: No 00:14:18.845 00:14:18.845 Admin Command Set Attributes 00:14:18.845 ============================ 00:14:18.845 Security Send/Receive: Not Supported 00:14:18.845 Format NVM: Not Supported 00:14:18.845 Firmware Activate/Download: Not Supported 00:14:18.845 Namespace Management: Not Supported 00:14:18.845 Device Self-Test: Not Supported 00:14:18.845 Directives: Not Supported 00:14:18.845 NVMe-MI: Not Supported 00:14:18.845 Virtualization Management: Not Supported 00:14:18.845 Doorbell Buffer Config: Not Supported 00:14:18.845 Get LBA Status Capability: Not Supported 00:14:18.845 Command & Feature Lockdown Capability: Not Supported 00:14:18.845 Abort Command Limit: 4 00:14:18.845 Async Event Request Limit: 4 00:14:18.845 Number of Firmware Slots: N/A 00:14:18.845 Firmware Slot 1 Read-Only: N/A 00:14:18.845 Firmware Activation Without Reset: N/A 00:14:18.845 Multiple Update Detection Support: N/A 00:14:18.845 Firmware Update Granularity: No Information Provided 00:14:18.845 Per-Namespace SMART Log: No 00:14:18.845 Asymmetric Namespace Access 
Log Page: Not Supported 00:14:18.845 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:18.845 Command Effects Log Page: Supported 00:14:18.845 Get Log Page Extended Data: Supported 00:14:18.845 Telemetry Log Pages: Not Supported 00:14:18.845 Persistent Event Log Pages: Not Supported 00:14:18.845 Supported Log Pages Log Page: May Support 00:14:18.845 Commands Supported & Effects Log Page: Not Supported 00:14:18.845 Feature Identifiers & Effects Log Page:May Support 00:14:18.845 NVMe-MI Commands & Effects Log Page: May Support 00:14:18.845 Data Area 4 for Telemetry Log: Not Supported 00:14:18.845 Error Log Page Entries Supported: 128 00:14:18.845 Keep Alive: Supported 00:14:18.845 Keep Alive Granularity: 10000 ms 00:14:18.845 00:14:18.845 NVM Command Set Attributes 00:14:18.845 ========================== 00:14:18.845 Submission Queue Entry Size 00:14:18.845 Max: 64 00:14:18.845 Min: 64 00:14:18.845 Completion Queue Entry Size 00:14:18.845 Max: 16 00:14:18.845 Min: 16 00:14:18.845 Number of Namespaces: 32 00:14:18.845 Compare Command: Supported 00:14:18.845 Write Uncorrectable Command: Not Supported 00:14:18.845 Dataset Management Command: Supported 00:14:18.845 Write Zeroes Command: Supported 00:14:18.845 Set Features Save Field: Not Supported 00:14:18.845 Reservations: Not Supported 00:14:18.845 Timestamp: Not Supported 00:14:18.845 Copy: Supported 00:14:18.845 Volatile Write Cache: Present 00:14:18.845 Atomic Write Unit (Normal): 1 00:14:18.845 Atomic Write Unit (PFail): 1 00:14:18.845 Atomic Compare & Write Unit: 1 00:14:18.845 Fused Compare & Write: Supported 00:14:18.845 Scatter-Gather List 00:14:18.845 SGL Command Set: Supported (Dword aligned) 00:14:18.845 SGL Keyed: Not Supported 00:14:18.845 SGL Bit Bucket Descriptor: Not Supported 00:14:18.845 SGL Metadata Pointer: Not Supported 00:14:18.845 Oversized SGL: Not Supported 00:14:18.845 SGL Metadata Address: Not Supported 00:14:18.845 SGL Offset: Not Supported 00:14:18.845 Transport SGL Data Block: Not Supported 
00:14:18.845 Replay Protected Memory Block: Not Supported 00:14:18.845 00:14:18.845 Firmware Slot Information 00:14:18.845 ========================= 00:14:18.845 Active slot: 1 00:14:18.845 Slot 1 Firmware Revision: 25.01 00:14:18.845 00:14:18.845 00:14:18.845 Commands Supported and Effects 00:14:18.845 ============================== 00:14:18.845 Admin Commands 00:14:18.845 -------------- 00:14:18.845 Get Log Page (02h): Supported 00:14:18.845 Identify (06h): Supported 00:14:18.845 Abort (08h): Supported 00:14:18.845 Set Features (09h): Supported 00:14:18.845 Get Features (0Ah): Supported 00:14:18.845 Asynchronous Event Request (0Ch): Supported 00:14:18.845 Keep Alive (18h): Supported 00:14:18.845 I/O Commands 00:14:18.845 ------------ 00:14:18.845 Flush (00h): Supported LBA-Change 00:14:18.845 Write (01h): Supported LBA-Change 00:14:18.845 Read (02h): Supported 00:14:18.845 Compare (05h): Supported 00:14:18.845 Write Zeroes (08h): Supported LBA-Change 00:14:18.845 Dataset Management (09h): Supported LBA-Change 00:14:18.845 Copy (19h): Supported LBA-Change 00:14:18.845 00:14:18.845 Error Log 00:14:18.845 ========= 00:14:18.845 00:14:18.845 Arbitration 00:14:18.845 =========== 00:14:18.845 Arbitration Burst: 1 00:14:18.845 00:14:18.845 Power Management 00:14:18.845 ================ 00:14:18.845 Number of Power States: 1 00:14:18.845 Current Power State: Power State #0 00:14:18.845 Power State #0: 00:14:18.845 Max Power: 0.00 W 00:14:18.845 Non-Operational State: Operational 00:14:18.845 Entry Latency: Not Reported 00:14:18.845 Exit Latency: Not Reported 00:14:18.845 Relative Read Throughput: 0 00:14:18.845 Relative Read Latency: 0 00:14:18.845 Relative Write Throughput: 0 00:14:18.845 Relative Write Latency: 0 00:14:18.845 Idle Power: Not Reported 00:14:18.845 Active Power: Not Reported 00:14:18.845 Non-Operational Permissive Mode: Not Supported 00:14:18.845 00:14:18.845 Health Information 00:14:18.845 ================== 00:14:18.845 Critical Warnings: 00:14:18.845 
Available Spare Space: OK 00:14:18.845 Temperature: OK 00:14:18.845 Device Reliability: OK 00:14:18.845 Read Only: No 00:14:18.845 Volatile Memory Backup: OK 00:14:18.845 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:18.845 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:18.845 Available Spare: 0% 00:14:18.845 Available Sp[2024-10-01 15:11:28.603138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:18.845 [2024-10-01 15:11:28.611003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:18.845 [2024-10-01 15:11:28.611037] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:18.845 [2024-10-01 15:11:28.611047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.845 [2024-10-01 15:11:28.611053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.845 [2024-10-01 15:11:28.611060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.846 [2024-10-01 15:11:28.611066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.846 [2024-10-01 15:11:28.611104] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:18.846 [2024-10-01 15:11:28.611114] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:18.846 [2024-10-01 15:11:28.612114] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:14:18.846 [2024-10-01 15:11:28.612164] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:18.846 [2024-10-01 15:11:28.612171] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:18.846 [2024-10-01 15:11:28.613118] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:18.846 [2024-10-01 15:11:28.613130] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:18.846 [2024-10-01 15:11:28.613185] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:18.846 [2024-10-01 15:11:28.614559] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:18.846 are Threshold: 0% 00:14:18.846 Life Percentage Used: 0% 00:14:18.846 Data Units Read: 0 00:14:18.846 Data Units Written: 0 00:14:18.846 Host Read Commands: 0 00:14:18.846 Host Write Commands: 0 00:14:18.846 Controller Busy Time: 0 minutes 00:14:18.846 Power Cycles: 0 00:14:18.846 Power On Hours: 0 hours 00:14:18.846 Unsafe Shutdowns: 0 00:14:18.846 Unrecoverable Media Errors: 0 00:14:18.846 Lifetime Error Log Entries: 0 00:14:18.846 Warning Temperature Time: 0 minutes 00:14:18.846 Critical Temperature Time: 0 minutes 00:14:18.846 00:14:18.846 Number of Queues 00:14:18.846 ================ 00:14:18.846 Number of I/O Submission Queues: 127 00:14:18.846 Number of I/O Completion Queues: 127 00:14:18.846 00:14:18.846 Active Namespaces 00:14:18.846 ================= 00:14:18.846 Namespace ID:1 00:14:18.846 Error Recovery Timeout: Unlimited 00:14:18.846 Command Set Identifier: NVM (00h) 00:14:18.846 Deallocate: Supported 00:14:18.846 Deallocated/Unwritten Error: Not Supported 
00:14:18.846 Deallocated Read Value: Unknown 00:14:18.846 Deallocate in Write Zeroes: Not Supported 00:14:18.846 Deallocated Guard Field: 0xFFFF 00:14:18.846 Flush: Supported 00:14:18.846 Reservation: Supported 00:14:18.846 Namespace Sharing Capabilities: Multiple Controllers 00:14:18.846 Size (in LBAs): 131072 (0GiB) 00:14:18.846 Capacity (in LBAs): 131072 (0GiB) 00:14:18.846 Utilization (in LBAs): 131072 (0GiB) 00:14:18.846 NGUID: 259BBCB932FB407DBC3842082D0FAAB1 00:14:18.846 UUID: 259bbcb9-32fb-407d-bc38-42082d0faab1 00:14:18.846 Thin Provisioning: Not Supported 00:14:18.846 Per-NS Atomic Units: Yes 00:14:18.846 Atomic Boundary Size (Normal): 0 00:14:18.846 Atomic Boundary Size (PFail): 0 00:14:18.846 Atomic Boundary Offset: 0 00:14:18.846 Maximum Single Source Range Length: 65535 00:14:18.846 Maximum Copy Length: 65535 00:14:18.846 Maximum Source Range Count: 1 00:14:18.846 NGUID/EUI64 Never Reused: No 00:14:18.846 Namespace Write Protected: No 00:14:18.846 Number of LBA Formats: 1 00:14:18.846 Current LBA Format: LBA Format #00 00:14:18.846 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:18.846 00:14:18.846 15:11:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:19.106 [2024-10-01 15:11:28.801052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.393 Initializing NVMe Controllers 00:14:24.393 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:24.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:24.393 Initialization complete. Launching workers. 
00:14:24.393 ======================================================== 00:14:24.393 Latency(us) 00:14:24.393 Device Information : IOPS MiB/s Average min max 00:14:24.393 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39979.40 156.17 3201.64 844.73 10789.49 00:14:24.393 ======================================================== 00:14:24.393 Total : 39979.40 156.17 3201.64 844.73 10789.49 00:14:24.393 00:14:24.393 [2024-10-01 15:11:33.905205] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.393 15:11:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:24.393 [2024-10-01 15:11:34.085741] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:29.678 Initializing NVMe Controllers 00:14:29.678 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:29.678 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:29.678 Initialization complete. Launching workers. 
00:14:29.678 ======================================================== 00:14:29.678 Latency(us) 00:14:29.678 Device Information : IOPS MiB/s Average min max 00:14:29.678 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35322.95 137.98 3623.37 1105.70 7361.00 00:14:29.678 ======================================================== 00:14:29.678 Total : 35322.95 137.98 3623.37 1105.70 7361.00 00:14:29.678 00:14:29.678 [2024-10-01 15:11:39.106497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:29.678 15:11:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:29.678 [2024-10-01 15:11:39.296636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:34.969 [2024-10-01 15:11:44.435084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:34.969 Initializing NVMe Controllers 00:14:34.969 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:34.969 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:34.969 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:34.969 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:34.969 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:34.969 Initialization complete. Launching workers. 
00:14:34.969 Starting thread on core 2 00:14:34.969 Starting thread on core 3 00:14:34.969 Starting thread on core 1 00:14:34.970 15:11:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:34.970 [2024-10-01 15:11:44.697961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:38.270 [2024-10-01 15:11:47.757361] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:38.270 Initializing NVMe Controllers 00:14:38.270 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:38.270 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:38.270 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:38.270 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:38.270 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:38.270 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:38.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:38.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:38.270 Initialization complete. Launching workers. 
00:14:38.270 Starting thread on core 1 with urgent priority queue 00:14:38.270 Starting thread on core 2 with urgent priority queue 00:14:38.270 Starting thread on core 3 with urgent priority queue 00:14:38.270 Starting thread on core 0 with urgent priority queue 00:14:38.270 SPDK bdev Controller (SPDK2 ) core 0: 9310.33 IO/s 10.74 secs/100000 ios 00:14:38.270 SPDK bdev Controller (SPDK2 ) core 1: 8951.67 IO/s 11.17 secs/100000 ios 00:14:38.270 SPDK bdev Controller (SPDK2 ) core 2: 12468.67 IO/s 8.02 secs/100000 ios 00:14:38.270 SPDK bdev Controller (SPDK2 ) core 3: 10527.67 IO/s 9.50 secs/100000 ios 00:14:38.270 ======================================================== 00:14:38.270 00:14:38.270 15:11:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:38.270 [2024-10-01 15:11:48.028472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:38.270 Initializing NVMe Controllers 00:14:38.270 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:38.270 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:38.270 Namespace ID: 1 size: 0GB 00:14:38.270 Initialization complete. 00:14:38.270 INFO: using host memory buffer for IO 00:14:38.270 Hello world! 
00:14:38.270 [2024-10-01 15:11:48.037527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:38.270 15:11:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:38.530 [2024-10-01 15:11:48.299315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:39.911 Initializing NVMe Controllers 00:14:39.911 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:39.911 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:39.911 Initialization complete. Launching workers. 00:14:39.911 submit (in ns) avg, min, max = 8414.7, 3892.5, 4000216.7 00:14:39.911 complete (in ns) avg, min, max = 18083.2, 2387.5, 5992980.0 00:14:39.911 00:14:39.911 Submit histogram 00:14:39.911 ================ 00:14:39.911 Range in us Cumulative Count 00:14:39.911 3.867 - 3.893: 0.0052% ( 1) 00:14:39.911 3.893 - 3.920: 1.8965% ( 363) 00:14:39.911 3.920 - 3.947: 8.7584% ( 1317) 00:14:39.911 3.947 - 3.973: 19.5644% ( 2074) 00:14:39.911 3.973 - 4.000: 32.6994% ( 2521) 00:14:39.911 4.000 - 4.027: 44.3235% ( 2231) 00:14:39.911 4.027 - 4.053: 56.2340% ( 2286) 00:14:39.911 4.053 - 4.080: 71.1666% ( 2866) 00:14:39.911 4.080 - 4.107: 83.5930% ( 2385) 00:14:39.911 4.107 - 4.133: 93.6904% ( 1938) 00:14:39.911 4.133 - 4.160: 98.0462% ( 836) 00:14:39.911 4.160 - 4.187: 99.2133% ( 224) 00:14:39.911 4.187 - 4.213: 99.5102% ( 57) 00:14:39.911 4.213 - 4.240: 99.5832% ( 14) 00:14:39.911 4.240 - 4.267: 99.5988% ( 3) 00:14:39.911 4.320 - 4.347: 99.6040% ( 1) 00:14:39.911 4.560 - 4.587: 99.6092% ( 1) 00:14:39.911 4.800 - 4.827: 99.6144% ( 1) 00:14:39.911 4.853 - 4.880: 99.6197% ( 1) 00:14:39.911 5.253 - 5.280: 99.6249% ( 1) 00:14:39.911 5.307 - 5.333: 99.6301% ( 1) 
00:14:39.911 5.333 - 5.360: 99.6353% ( 1) 00:14:39.911 5.840 - 5.867: 99.6405% ( 1) 00:14:39.911 5.867 - 5.893: 99.6509% ( 2) 00:14:39.911 5.893 - 5.920: 99.6561% ( 1) 00:14:39.911 6.027 - 6.053: 99.6718% ( 3) 00:14:39.911 6.053 - 6.080: 99.6822% ( 2) 00:14:39.911 6.080 - 6.107: 99.6926% ( 2) 00:14:39.911 6.187 - 6.213: 99.7030% ( 2) 00:14:39.911 6.213 - 6.240: 99.7082% ( 1) 00:14:39.911 6.240 - 6.267: 99.7239% ( 3) 00:14:39.911 6.293 - 6.320: 99.7291% ( 1) 00:14:39.911 6.320 - 6.347: 99.7395% ( 2) 00:14:39.911 6.347 - 6.373: 99.7447% ( 1) 00:14:39.911 6.400 - 6.427: 99.7499% ( 1) 00:14:39.911 6.427 - 6.453: 99.7655% ( 3) 00:14:39.911 6.480 - 6.507: 99.7760% ( 2) 00:14:39.911 6.587 - 6.613: 99.7812% ( 1) 00:14:39.911 6.640 - 6.667: 99.7916% ( 2) 00:14:39.911 6.693 - 6.720: 99.7968% ( 1) 00:14:39.911 6.827 - 6.880: 99.8072% ( 2) 00:14:39.911 6.880 - 6.933: 99.8176% ( 2) 00:14:39.911 6.933 - 6.987: 99.8281% ( 2) 00:14:39.912 6.987 - 7.040: 99.8333% ( 1) 00:14:39.912 7.093 - 7.147: 99.8437% ( 2) 00:14:39.912 7.147 - 7.200: 99.8489% ( 1) 00:14:39.912 7.413 - 7.467: 99.8541% ( 1) 00:14:39.912 7.520 - 7.573: 99.8645% ( 2) 00:14:39.912 7.947 - 8.000: 99.8697% ( 1) 00:14:39.912 8.000 - 8.053: 99.8750% ( 1) 00:14:39.912 10.347 - 10.400: 99.8802% ( 1) 00:14:39.912 11.307 - 11.360: 99.8854% ( 1) 00:14:39.912 13.867 - 13.973: 99.8906% ( 1) 00:14:39.912 3986.773 - 4014.080: 100.0000% ( 21) 00:14:39.912 00:14:39.912 Complete histogram 00:14:39.912 ================== 00:14:39.912 Range in us Cumulative Count 00:14:39.912 2.387 - 2.400: 0.5992% ( 115) 00:14:39.912 2.400 - 2.413: 1.0316% ( 83) 00:14:39.912 2.413 - 2.427: 1.1150% ( 16) 00:14:39.912 2.427 - 2.440: 1.1827% ( 13) 00:14:39.912 2.440 - 2.453: 1.2140% ( 6) 00:14:39.912 2.453 - 2.467: 44.3495% ( 8279) 00:14:39.912 2.467 - 2.480: 58.6881% ( 2752) 00:14:39.912 2.480 - 2.493: 71.6668% ( 2491) 00:14:39.912 2.493 - 2.507: 78.7370% ( 1357) 00:14:39.912 2.507 - 2.520: 80.9410% ( 423) 00:14:39.912 2.520 - 2.533: 83.6763% ( 525) 
00:14:39.912 2.533 - 2.547: 89.5066% ( 1119) 00:14:39.912 2.547 - 2.560: 94.8054% ( 1017) 00:14:39.912 2.560 - 2.573: 97.4261% ( 503) 00:14:39.912 2.573 - 2.587: 98.7183% ( 248) 00:14:39.912 2.587 - 2.600: 99.1924% ( 91) 00:14:39.912 2.600 - 2.613: 99.3175% ( 24) 00:14:39.912 2.613 - 2.627: 99.3696% ( 10) 00:14:39.912 2.627 - 2.640: 99.3748% ( 1) 00:14:39.912 2.640 - 2.653: 99.3800% ( 1) 00:14:39.912 2.893 - 2.907: 99.3852% ( 1) 00:14:39.912 4.213 - 4.240: 99.3904% ( 1) 00:14:39.912 4.293 - 4.320: 99.4008% ( 2) 00:14:39.912 4.373 - [2024-10-01 15:11:49.393660] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:39.912 4.400: 99.4060% ( 1) 00:14:39.912 4.400 - 4.427: 99.4112% ( 1) 00:14:39.912 4.453 - 4.480: 99.4165% ( 1) 00:14:39.912 4.507 - 4.533: 99.4217% ( 1) 00:14:39.912 4.560 - 4.587: 99.4269% ( 1) 00:14:39.912 4.613 - 4.640: 99.4425% ( 3) 00:14:39.912 4.640 - 4.667: 99.4477% ( 1) 00:14:39.912 4.667 - 4.693: 99.4529% ( 1) 00:14:39.912 4.747 - 4.773: 99.4581% ( 1) 00:14:39.912 4.773 - 4.800: 99.4633% ( 1) 00:14:39.912 4.800 - 4.827: 99.4738% ( 2) 00:14:39.912 4.880 - 4.907: 99.4894% ( 3) 00:14:39.912 4.933 - 4.960: 99.4946% ( 1) 00:14:39.912 4.987 - 5.013: 99.4998% ( 1) 00:14:39.912 5.013 - 5.040: 99.5050% ( 1) 00:14:39.912 5.067 - 5.093: 99.5154% ( 2) 00:14:39.912 5.093 - 5.120: 99.5207% ( 1) 00:14:39.912 5.120 - 5.147: 99.5259% ( 1) 00:14:39.912 5.200 - 5.227: 99.5311% ( 1) 00:14:39.912 5.333 - 5.360: 99.5363% ( 1) 00:14:39.912 5.413 - 5.440: 99.5415% ( 1) 00:14:39.912 5.467 - 5.493: 99.5467% ( 1) 00:14:39.912 5.520 - 5.547: 99.5519% ( 1) 00:14:39.912 5.600 - 5.627: 99.5571% ( 1) 00:14:39.912 5.627 - 5.653: 99.5623% ( 1) 00:14:39.912 5.680 - 5.707: 99.5676% ( 1) 00:14:39.912 5.840 - 5.867: 99.5728% ( 1) 00:14:39.912 6.080 - 6.107: 99.5780% ( 1) 00:14:39.912 6.747 - 6.773: 99.5832% ( 1) 00:14:39.912 10.293 - 10.347: 99.5884% ( 1) 00:14:39.912 11.360 - 11.413: 99.5936% ( 1) 00:14:39.912 11.627 - 11.680: 
99.5988% ( 1) 00:14:39.912 12.587 - 12.640: 99.6040% ( 1) 00:14:39.912 44.160 - 44.373: 99.6092% ( 1) 00:14:39.912 1617.920 - 1624.747: 99.6144% ( 1) 00:14:39.912 3986.773 - 4014.080: 99.9948% ( 73) 00:14:39.912 5980.160 - 6007.467: 100.0000% ( 1) 00:14:39.912 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:39.912 [ 00:14:39.912 { 00:14:39.912 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:39.912 "subtype": "Discovery", 00:14:39.912 "listen_addresses": [], 00:14:39.912 "allow_any_host": true, 00:14:39.912 "hosts": [] 00:14:39.912 }, 00:14:39.912 { 00:14:39.912 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:39.912 "subtype": "NVMe", 00:14:39.912 "listen_addresses": [ 00:14:39.912 { 00:14:39.912 "trtype": "VFIOUSER", 00:14:39.912 "adrfam": "IPv4", 00:14:39.912 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:39.912 "trsvcid": "0" 00:14:39.912 } 00:14:39.912 ], 00:14:39.912 "allow_any_host": true, 00:14:39.912 "hosts": [], 00:14:39.912 "serial_number": "SPDK1", 00:14:39.912 "model_number": "SPDK bdev Controller", 00:14:39.912 "max_namespaces": 32, 00:14:39.912 "min_cntlid": 1, 00:14:39.912 "max_cntlid": 65519, 00:14:39.912 "namespaces": [ 00:14:39.912 { 00:14:39.912 "nsid": 1, 00:14:39.912 "bdev_name": "Malloc1", 00:14:39.912 
"name": "Malloc1", 00:14:39.912 "nguid": "7FFD7C583E874DA4950364F8FCDB62FF", 00:14:39.912 "uuid": "7ffd7c58-3e87-4da4-9503-64f8fcdb62ff" 00:14:39.912 }, 00:14:39.912 { 00:14:39.912 "nsid": 2, 00:14:39.912 "bdev_name": "Malloc3", 00:14:39.912 "name": "Malloc3", 00:14:39.912 "nguid": "CDB037E0EA9841A787E45A7718F5F901", 00:14:39.912 "uuid": "cdb037e0-ea98-41a7-87e4-5a7718f5f901" 00:14:39.912 } 00:14:39.912 ] 00:14:39.912 }, 00:14:39.912 { 00:14:39.912 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:39.912 "subtype": "NVMe", 00:14:39.912 "listen_addresses": [ 00:14:39.912 { 00:14:39.912 "trtype": "VFIOUSER", 00:14:39.912 "adrfam": "IPv4", 00:14:39.912 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:39.912 "trsvcid": "0" 00:14:39.912 } 00:14:39.912 ], 00:14:39.912 "allow_any_host": true, 00:14:39.912 "hosts": [], 00:14:39.912 "serial_number": "SPDK2", 00:14:39.912 "model_number": "SPDK bdev Controller", 00:14:39.912 "max_namespaces": 32, 00:14:39.912 "min_cntlid": 1, 00:14:39.912 "max_cntlid": 65519, 00:14:39.912 "namespaces": [ 00:14:39.912 { 00:14:39.912 "nsid": 1, 00:14:39.912 "bdev_name": "Malloc2", 00:14:39.912 "name": "Malloc2", 00:14:39.912 "nguid": "259BBCB932FB407DBC3842082D0FAAB1", 00:14:39.912 "uuid": "259bbcb9-32fb-407d-bc38-42082d0faab1" 00:14:39.912 } 00:14:39.912 ] 00:14:39.912 } 00:14:39.912 ] 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3921033 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:39.912 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:40.174 Malloc4 00:14:40.174 [2024-10-01 15:11:49.803480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:40.174 15:11:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:40.174 [2024-10-01 15:11:49.996804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:40.174 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:40.435 Asynchronous Event Request test 00:14:40.435 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:40.435 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:40.435 Registering asynchronous event callbacks... 00:14:40.435 Starting namespace attribute notice tests for all controllers... 
00:14:40.435 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:40.435 aer_cb - Changed Namespace 00:14:40.435 Cleaning up... 00:14:40.435 [ 00:14:40.435 { 00:14:40.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:40.435 "subtype": "Discovery", 00:14:40.435 "listen_addresses": [], 00:14:40.435 "allow_any_host": true, 00:14:40.435 "hosts": [] 00:14:40.435 }, 00:14:40.435 { 00:14:40.435 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:40.435 "subtype": "NVMe", 00:14:40.435 "listen_addresses": [ 00:14:40.435 { 00:14:40.435 "trtype": "VFIOUSER", 00:14:40.435 "adrfam": "IPv4", 00:14:40.435 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:40.435 "trsvcid": "0" 00:14:40.435 } 00:14:40.435 ], 00:14:40.435 "allow_any_host": true, 00:14:40.435 "hosts": [], 00:14:40.435 "serial_number": "SPDK1", 00:14:40.435 "model_number": "SPDK bdev Controller", 00:14:40.435 "max_namespaces": 32, 00:14:40.435 "min_cntlid": 1, 00:14:40.435 "max_cntlid": 65519, 00:14:40.435 "namespaces": [ 00:14:40.435 { 00:14:40.435 "nsid": 1, 00:14:40.435 "bdev_name": "Malloc1", 00:14:40.435 "name": "Malloc1", 00:14:40.435 "nguid": "7FFD7C583E874DA4950364F8FCDB62FF", 00:14:40.435 "uuid": "7ffd7c58-3e87-4da4-9503-64f8fcdb62ff" 00:14:40.435 }, 00:14:40.435 { 00:14:40.435 "nsid": 2, 00:14:40.435 "bdev_name": "Malloc3", 00:14:40.435 "name": "Malloc3", 00:14:40.435 "nguid": "CDB037E0EA9841A787E45A7718F5F901", 00:14:40.435 "uuid": "cdb037e0-ea98-41a7-87e4-5a7718f5f901" 00:14:40.435 } 00:14:40.435 ] 00:14:40.435 }, 00:14:40.435 { 00:14:40.435 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:40.435 "subtype": "NVMe", 00:14:40.435 "listen_addresses": [ 00:14:40.435 { 00:14:40.435 "trtype": "VFIOUSER", 00:14:40.435 "adrfam": "IPv4", 00:14:40.435 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:40.435 "trsvcid": "0" 00:14:40.435 } 00:14:40.435 ], 00:14:40.435 "allow_any_host": true, 00:14:40.435 "hosts": [], 00:14:40.435 "serial_number": 
"SPDK2", 00:14:40.435 "model_number": "SPDK bdev Controller", 00:14:40.435 "max_namespaces": 32, 00:14:40.435 "min_cntlid": 1, 00:14:40.435 "max_cntlid": 65519, 00:14:40.435 "namespaces": [ 00:14:40.435 { 00:14:40.435 "nsid": 1, 00:14:40.435 "bdev_name": "Malloc2", 00:14:40.435 "name": "Malloc2", 00:14:40.435 "nguid": "259BBCB932FB407DBC3842082D0FAAB1", 00:14:40.435 "uuid": "259bbcb9-32fb-407d-bc38-42082d0faab1" 00:14:40.435 }, 00:14:40.435 { 00:14:40.435 "nsid": 2, 00:14:40.435 "bdev_name": "Malloc4", 00:14:40.435 "name": "Malloc4", 00:14:40.435 "nguid": "F9C4A288FC0E46D48E5BCFC34FF5D8A1", 00:14:40.435 "uuid": "f9c4a288-fc0e-46d4-8e5b-cfc34ff5d8a1" 00:14:40.435 } 00:14:40.435 ] 00:14:40.435 } 00:14:40.435 ] 00:14:40.435 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3921033 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3912011 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3912011 ']' 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3912011 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3912011 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3912011' 00:14:40.436 killing process with pid 3912011 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3912011 00:14:40.436 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3912011 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3921126 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3921126' 00:14:40.696 Process pid: 3921126 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3921126 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3921126 ']' 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.696 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.697 15:11:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:40.697 [2024-10-01 15:11:50.518865] thread.c:2945:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:40.697 [2024-10-01 15:11:50.519804] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:14:40.697 [2024-10-01 15:11:50.519852] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.957 [2024-10-01 15:11:50.581418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.957 [2024-10-01 15:11:50.645960] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.957 [2024-10-01 15:11:50.646007] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.957 [2024-10-01 15:11:50.646016] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.957 [2024-10-01 15:11:50.646023] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:40.957 [2024-10-01 15:11:50.646029] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.957 [2024-10-01 15:11:50.646116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.957 [2024-10-01 15:11:50.646250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.957 [2024-10-01 15:11:50.646404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.957 [2024-10-01 15:11:50.646405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.957 [2024-10-01 15:11:50.715492] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:40.957 [2024-10-01 15:11:50.715602] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:40.957 [2024-10-01 15:11:50.716553] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:40.957 [2024-10-01 15:11:50.717232] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:40.957 [2024-10-01 15:11:50.717298] thread.c:2096:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:41.526 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.526 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:41.526 15:11:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:42.465 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:42.726 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:42.726 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:42.726 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:42.726 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:42.726 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:42.986 Malloc1 00:14:42.986 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:43.247 15:11:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:43.508 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:43.508 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:43.508 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:43.508 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:43.767 Malloc2 00:14:43.767 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:44.027 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:44.027 15:11:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3921126 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3921126 ']' 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3921126 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.288 15:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3921126 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3921126' 00:14:44.288 killing process with pid 3921126 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3921126 00:14:44.288 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3921126 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:44.549 00:14:44.549 real 0m51.220s 00:14:44.549 user 3m16.075s 00:14:44.549 sys 0m2.796s 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:44.549 ************************************ 00:14:44.549 END TEST nvmf_vfio_user 00:14:44.549 ************************************ 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.549 ************************************ 00:14:44.549 START TEST nvmf_vfio_user_nvme_compliance 00:14:44.549 ************************************ 00:14:44.549 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:44.811 * Looking for test storage... 00:14:44.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.811 15:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.811 15:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:44.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.811 --rc genhtml_branch_coverage=1 00:14:44.811 --rc genhtml_function_coverage=1 00:14:44.811 --rc genhtml_legend=1 00:14:44.811 --rc geninfo_all_blocks=1 00:14:44.811 --rc geninfo_unexecuted_blocks=1 00:14:44.811 00:14:44.811 ' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:44.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.811 --rc genhtml_branch_coverage=1 00:14:44.811 --rc genhtml_function_coverage=1 00:14:44.811 --rc genhtml_legend=1 00:14:44.811 --rc geninfo_all_blocks=1 00:14:44.811 --rc geninfo_unexecuted_blocks=1 00:14:44.811 00:14:44.811 ' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:44.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.811 --rc genhtml_branch_coverage=1 00:14:44.811 --rc genhtml_function_coverage=1 00:14:44.811 --rc 
genhtml_legend=1 00:14:44.811 --rc geninfo_all_blocks=1 00:14:44.811 --rc geninfo_unexecuted_blocks=1 00:14:44.811 00:14:44.811 ' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:44.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.811 --rc genhtml_branch_coverage=1 00:14:44.811 --rc genhtml_function_coverage=1 00:14:44.811 --rc genhtml_legend=1 00:14:44.811 --rc geninfo_all_blocks=1 00:14:44.811 --rc geninfo_unexecuted_blocks=1 00:14:44.811 00:14:44.811 ' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.811 15:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:44.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:44.811 15:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:44.811 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3922054 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3922054' 00:14:44.812 Process pid: 3922054 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3922054 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3922054 ']' 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.812 15:11:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:45.072 [2024-10-01 15:11:54.675663] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:14:45.072 [2024-10-01 15:11:54.675737] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.072 [2024-10-01 15:11:54.743327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:45.072 [2024-10-01 15:11:54.818761] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.072 [2024-10-01 15:11:54.818801] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.072 [2024-10-01 15:11:54.818809] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.072 [2024-10-01 15:11:54.818816] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.072 [2024-10-01 15:11:54.818822] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:45.072 [2024-10-01 15:11:54.818963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.072 [2024-10-01 15:11:54.819090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.072 [2024-10-01 15:11:54.819094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.649 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.649 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:14:45.649 15:11:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.032 15:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:47.032 malloc0 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:47.032 15:11:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:47.032 00:14:47.032 00:14:47.032 CUnit - A unit testing framework for C - Version 2.1-3 00:14:47.032 http://cunit.sourceforge.net/ 00:14:47.032 00:14:47.032 00:14:47.032 Suite: nvme_compliance 00:14:47.032 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-01 15:11:56.741439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:47.032 [2024-10-01 15:11:56.742797] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:47.032 [2024-10-01 15:11:56.742809] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:47.032 [2024-10-01 15:11:56.742814] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:47.032 [2024-10-01 15:11:56.744454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:47.032 passed 00:14:47.032 Test: admin_identify_ctrlr_verify_fused ...[2024-10-01 15:11:56.840056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:47.032 [2024-10-01 15:11:56.843075] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:47.032 passed 00:14:47.293 Test: admin_identify_ns ...[2024-10-01 15:11:56.941304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:47.293 [2024-10-01 15:11:57.001006] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:47.293 [2024-10-01 15:11:57.009010] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:47.293 [2024-10-01 15:11:57.030115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:47.293 passed 00:14:47.293 Test: admin_get_features_mandatory_features ...[2024-10-01 15:11:57.121712] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:47.293 [2024-10-01 15:11:57.124727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:47.552 passed 00:14:47.553 Test: admin_get_features_optional_features ...[2024-10-01 15:11:57.219286] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:47.553 [2024-10-01 15:11:57.222307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:47.553 passed 00:14:47.553 Test: admin_set_features_number_of_queues ...[2024-10-01 15:11:57.314405] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:47.813 [2024-10-01 15:11:57.422106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:47.813 passed 00:14:47.813 Test: admin_get_log_page_mandatory_logs ...[2024-10-01 15:11:57.514755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:47.813 [2024-10-01 15:11:57.517769] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:47.813 passed 00:14:47.813 Test: admin_get_log_page_with_lpo ...[2024-10-01 15:11:57.609262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:48.073 [2024-10-01 15:11:57.681009] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:48.073 [2024-10-01 15:11:57.694052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:48.073 passed 00:14:48.073 Test: fabric_property_get ...[2024-10-01 15:11:57.785677] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:48.073 [2024-10-01 15:11:57.786917] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:48.073 [2024-10-01 15:11:57.788702] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:48.073 passed 00:14:48.073 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-01 15:11:57.881332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:48.073 [2024-10-01 15:11:57.882588] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:48.073 [2024-10-01 15:11:57.884352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:48.073 passed 00:14:48.333 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-01 15:11:57.978505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:48.333 [2024-10-01 15:11:58.062006] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:48.333 [2024-10-01 15:11:58.078005] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:48.333 [2024-10-01 15:11:58.083099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:48.333 passed 00:14:48.333 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-01 15:11:58.175101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:48.333 [2024-10-01 15:11:58.176342] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:48.333 [2024-10-01 15:11:58.178122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:48.593 passed 00:14:48.593 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-01 15:11:58.273262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:48.593 [2024-10-01 15:11:58.349006] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:48.593 [2024-10-01 
15:11:58.373005] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:48.593 [2024-10-01 15:11:58.378057] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:48.594 passed 00:14:48.854 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-01 15:11:58.469674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:48.854 [2024-10-01 15:11:58.470920] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:48.854 [2024-10-01 15:11:58.470940] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:48.854 [2024-10-01 15:11:58.472697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:48.854 passed 00:14:48.854 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-01 15:11:58.565800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:48.854 [2024-10-01 15:11:58.657005] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:48.854 [2024-10-01 15:11:58.665011] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:48.854 [2024-10-01 15:11:58.673006] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:48.854 [2024-10-01 15:11:58.681008] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:48.854 [2024-10-01 15:11:58.710081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:49.114 passed 00:14:49.114 Test: admin_create_io_sq_verify_pc ...[2024-10-01 15:11:58.803720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:49.114 [2024-10-01 15:11:58.818010] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:49.114 [2024-10-01 15:11:58.835874] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:49.114 passed 00:14:49.114 Test: admin_create_io_qp_max_qps ...[2024-10-01 15:11:58.931398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:50.504 [2024-10-01 15:12:00.025007] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:50.843 [2024-10-01 15:12:00.406239] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:50.843 passed 00:14:50.844 Test: admin_create_io_sq_shared_cq ...[2024-10-01 15:12:00.499365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:50.844 [2024-10-01 15:12:00.631008] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:50.844 [2024-10-01 15:12:00.668070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:51.175 passed 00:14:51.175 00:14:51.175 Run Summary: Type Total Ran Passed Failed Inactive 00:14:51.175 suites 1 1 n/a 0 0 00:14:51.175 tests 18 18 18 0 0 00:14:51.175 asserts 360 360 360 0 n/a 00:14:51.175 00:14:51.175 Elapsed time = 1.645 seconds 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3922054 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3922054 ']' 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3922054 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3922054 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3922054' 00:14:51.175 killing process with pid 3922054 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3922054 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3922054 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:51.175 00:14:51.175 real 0m6.567s 00:14:51.175 user 0m18.527s 00:14:51.175 sys 0m0.544s 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:51.175 ************************************ 00:14:51.175 END TEST nvmf_vfio_user_nvme_compliance 00:14:51.175 ************************************ 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.175 15:12:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.175 ************************************ 00:14:51.175 START TEST nvmf_vfio_user_fuzz 00:14:51.175 ************************************ 00:14:51.175 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:51.436 * Looking for test storage... 00:14:51.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.436 15:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:51.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.436 --rc genhtml_branch_coverage=1 00:14:51.436 --rc genhtml_function_coverage=1 00:14:51.436 --rc genhtml_legend=1 00:14:51.436 --rc geninfo_all_blocks=1 00:14:51.436 --rc geninfo_unexecuted_blocks=1 00:14:51.436 00:14:51.436 ' 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:51.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.436 --rc genhtml_branch_coverage=1 00:14:51.436 --rc genhtml_function_coverage=1 00:14:51.436 --rc genhtml_legend=1 00:14:51.436 --rc geninfo_all_blocks=1 00:14:51.436 --rc geninfo_unexecuted_blocks=1 00:14:51.436 00:14:51.436 ' 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:51.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.436 --rc genhtml_branch_coverage=1 00:14:51.436 --rc genhtml_function_coverage=1 00:14:51.436 --rc genhtml_legend=1 00:14:51.436 --rc geninfo_all_blocks=1 00:14:51.436 --rc geninfo_unexecuted_blocks=1 00:14:51.436 00:14:51.436 ' 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:51.436 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:51.436 --rc genhtml_branch_coverage=1 00:14:51.436 --rc genhtml_function_coverage=1 00:14:51.436 --rc genhtml_legend=1 00:14:51.436 --rc geninfo_all_blocks=1 00:14:51.436 --rc geninfo_unexecuted_blocks=1 00:14:51.436 00:14:51.436 ' 00:14:51.436 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.437 15:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:51.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3923408 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3923408' 00:14:51.437 Process pid: 3923408 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3923408 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3923408 ']' 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.437 15:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.437 15:12:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:52.380 15:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.380 15:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:14:52.380 15:12:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:53.321 malloc0 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.321 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:53.582 15:12:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:25.704 Fuzzing completed. Shutting down the fuzz application 00:15:25.704 00:15:25.704 Dumping successful admin opcodes: 00:15:25.704 8, 9, 10, 24, 00:15:25.704 Dumping successful io opcodes: 00:15:25.704 0, 00:15:25.704 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1160923, total successful commands: 4569, random_seed: 3414817536 00:15:25.704 NS: 0x200003a1ef00 admin qp, Total commands completed: 145843, total successful commands: 1182, random_seed: 2275909376 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3923408 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3923408 ']' 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3923408 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3923408 00:15:25.704 15:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3923408' 00:15:25.704 killing process with pid 3923408 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3923408 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3923408 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:25.704 00:15:25.704 real 0m33.870s 00:15:25.704 user 0m40.225s 00:15:25.704 sys 0m23.163s 00:15:25.704 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.705 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:25.705 ************************************ 00:15:25.705 END TEST nvmf_vfio_user_fuzz 00:15:25.705 ************************************ 00:15:25.705 15:12:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:25.705 15:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:25.705 15:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:15:25.705 15:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:25.705 ************************************ 00:15:25.705 START TEST nvmf_auth_target 00:15:25.705 ************************************ 00:15:25.705 15:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:25.705 * Looking for test storage... 00:15:25.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.705 15:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.705 15:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:25.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.705 --rc genhtml_branch_coverage=1 00:15:25.705 --rc genhtml_function_coverage=1 00:15:25.705 --rc genhtml_legend=1 00:15:25.705 --rc geninfo_all_blocks=1 00:15:25.705 --rc geninfo_unexecuted_blocks=1 00:15:25.705 00:15:25.705 ' 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:25.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.705 --rc genhtml_branch_coverage=1 00:15:25.705 --rc genhtml_function_coverage=1 00:15:25.705 --rc genhtml_legend=1 00:15:25.705 --rc geninfo_all_blocks=1 00:15:25.705 --rc geninfo_unexecuted_blocks=1 00:15:25.705 00:15:25.705 ' 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:25.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.705 --rc genhtml_branch_coverage=1 00:15:25.705 --rc genhtml_function_coverage=1 00:15:25.705 --rc genhtml_legend=1 00:15:25.705 --rc geninfo_all_blocks=1 00:15:25.705 --rc geninfo_unexecuted_blocks=1 00:15:25.705 00:15:25.705 ' 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:25.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.705 --rc genhtml_branch_coverage=1 00:15:25.705 --rc genhtml_function_coverage=1 00:15:25.705 --rc genhtml_legend=1 00:15:25.705 
--rc geninfo_all_blocks=1 00:15:25.705 --rc geninfo_unexecuted_blocks=1 00:15:25.705 00:15:25.705 ' 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.705 
15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.705 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:25.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:25.706 15:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:15:25.706 15:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:25.706 15:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:32.299 15:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}")
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # [[ tcp == rdma ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == e810 ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}")
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@359 -- # (( 2 == 0 ))
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:15:32.299 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}"
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:15:32.299 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # [[ ice == unknown ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # [[ ice == unbound ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # (( 0 > 0 ))
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ e810 == e810 ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@396 -- # [[ tcp == rdma ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:15:32.299 Found net devices under 0000:4b:00.0: cvl_0_0
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}"
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # [[ tcp == tcp ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ up == up ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # (( 1 == 0 ))
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:15:32.299 Found net devices under 0000:4b:00.1: cvl_0_1
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}")
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # (( 2 == 0 ))
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # is_hw=yes
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ yes == yes ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # [[ tcp == tcp ]]
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # nvmf_tcp_init
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:15:32.299 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:15:32.560 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:32.560 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:32.560 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:32.560 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:15:32.560 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:32.560 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:32.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:32.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms
00:15:32.821
00:15:32.821 --- 10.0.0.2 ping statistics ---
00:15:32.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:32.821 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:32.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:32.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms
00:15:32.821
00:15:32.821 --- 10.0.0.1 ping statistics ---
00:15:32.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:32.821 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # return 0
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3934177
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3934177
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3934177 ']'
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
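[Editor's note: the nvmf_tcp_init sequence traced above builds SPDK's standard two-port TCP test topology: one physical port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator. A condensed standalone sketch of those steps follows; interface names and addresses are the ones from this run, and it requires root plus the same two-port NIC layout, so treat it as illustrative rather than a drop-in script.]

```shell
#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init trace above; run as root.
set -e
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port leaves the root netns
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port (4420) on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # initiator -> target sanity check
```

[Because the two ports are physically connected, traffic between 10.0.0.1 and 10.0.0.2 exercises the real NIC datapath rather than loopback, which is why the trace then launches nvmf_tgt inside the namespace via `ip netns exec`.]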
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:32.821 15:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3934369
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=80f171205878972db4abdd96d23074f5a0cd505c81514eb3
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.1rI
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 80f171205878972db4abdd96d23074f5a0cd505c81514eb3 0
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 80f171205878972db4abdd96d23074f5a0cd505c81514eb3 0
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=80f171205878972db4abdd96d23074f5a0cd505c81514eb3
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python -
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.1rI
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.1rI
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.1rI
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=15daec31eb5819073e16aef0df352342b1d28b12d5f63c1543bb6d122ba23bd5
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.ZMY
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 15daec31eb5819073e16aef0df352342b1d28b12d5f63c1543bb6d122ba23bd5 3
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 15daec31eb5819073e16aef0df352342b1d28b12d5f63c1543bb6d122ba23bd5 3
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=15daec31eb5819073e16aef0df352342b1d28b12d5f63c1543bb6d122ba23bd5
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python -
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.ZMY
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.ZMY
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ZMY
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=9178ded3c0a2606ee0777afe41fa13fc
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.uEu
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 9178ded3c0a2606ee0777afe41fa13fc 1
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 9178ded3c0a2606ee0777afe41fa13fc 1
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=9178ded3c0a2606ee0777afe41fa13fc
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python -
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.uEu
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.uEu
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.uEu
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:33.762 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=0d863656e2ef834ada44f75cc6838d5be691553b7be02a10
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.hEX
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 0d863656e2ef834ada44f75cc6838d5be691553b7be02a10 2
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 0d863656e2ef834ada44f75cc6838d5be691553b7be02a10 2
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=0d863656e2ef834ada44f75cc6838d5be691553b7be02a10
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python -
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.hEX
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.hEX
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.hEX
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=aea5801d31453a775aae0b6167a1e32d8c4dcc71a706d587
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.oUj
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key aea5801d31453a775aae0b6167a1e32d8c4dcc71a706d587 2
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 aea5801d31453a775aae0b6167a1e32d8c4dcc71a706d587 2
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=aea5801d31453a775aae0b6167a1e32d8c4dcc71a706d587
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python -
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.oUj
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.oUj
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.oUj
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=08646dba678458179e0620003eab814f
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.5n0
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 08646dba678458179e0620003eab814f 1
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 08646dba678458179e0620003eab814f 1
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=08646dba678458179e0620003eab814f
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python -
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.5n0
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.5n0
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.5n0
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=22c3b6009446b9fd40c5ddb04b6e17361dd79b37f78b20f24076c868c8894124
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.gDG
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 22c3b6009446b9fd40c5ddb04b6e17361dd79b37f78b20f24076c868c8894124 3
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 22c3b6009446b9fd40c5ddb04b6e17361dd79b37f78b20f24076c868c8894124 3
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=22c3b6009446b9fd40c5ddb04b6e17361dd79b37f78b20f24076c868c8894124
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python -
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.gDG
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.gDG
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.gDG
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3934177
00:15:34.025 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3934177 ']'
00:15:34.026 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:34.026 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:34.026 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:34.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
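[Editor's note: each gen_dhchap_key call traced above follows the same recipe: draw len/2 random bytes with xxd, then wrap the hex string into a secret via a small inline Python step. A minimal standalone sketch of that flow follows; it is a hypothetical rewrite, assuming the standard DHHC-1 secret representation (base64 of the key bytes followed by their little-endian CRC-32, with a digest id of 0-3 for null/sha256/sha384/sha512), not SPDK's exact helper.]

```shell
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key flow seen in the trace (the real helper lives
# in SPDK's nvmf/common.sh; this is an illustrative standalone version).
gen_dhchap_key() {
  local digest=$1 len=$2        # digest: 0=null 1=sha256 2=sha384 3=sha512
  local key file
  # Draw len/2 random bytes and hex-encode them (len hex characters).
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  file=$(mktemp -t spdk.key-XXXXXX)
  # Wrap as a DH-HMAC-CHAP secret: DHHC-1:<digest>:base64(key || crc32(key)):
  # (CRC-32 appended little-endian; mirrors the trace's inline "python -" step).
  python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", binascii.crc32(key))
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
  chmod 0600 "$file"            # key files must not be world-readable
  echo "$file"
}

keyfile=$(gen_dhchap_key 1 32)  # sha256-hashed secret from 32 hex chars of key
cat "$keyfile"
```

[The resulting path is what the trace stores in keys[i]/ckeys[i] and later feeds to keyring_file_add_key on both the target and host RPC sockets.]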
00:15:34.026 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:34.026 15:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3934369 /var/tmp/host.sock
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3934369 ']'
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:15:34.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:34.286 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1rI
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1rI
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1rI
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ZMY ]]
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZMY
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.548 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZMY
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZMY
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uEu
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uEu
00:15:34.809 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uEu
00:15:35.070 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- #
[[ -n /tmp/spdk.key-sha384.hEX ]] 00:15:35.070 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hEX 00:15:35.070 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.070 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.070 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.070 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hEX 00:15:35.070 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hEX 00:15:35.331 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.331 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oUj 00:15:35.331 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.331 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.331 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.331 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oUj 00:15:35.331 15:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oUj 00:15:35.331 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.5n0 ]] 00:15:35.331 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5n0 00:15:35.331 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.331 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.331 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.331 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5n0 00:15:35.331 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5n0 00:15:35.592 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:35.592 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gDG 00:15:35.592 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.592 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.592 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.592 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gDG 00:15:35.592 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gDG 00:15:35.853 15:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:35.853 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:35.853 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.853 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.853 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.854 15:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.854 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.114 00:15:36.114 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.114 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.114 15:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.374 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.374 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.374 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.374 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.374 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.374 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.374 { 00:15:36.374 "cntlid": 1, 00:15:36.374 "qid": 0, 00:15:36.374 "state": "enabled", 00:15:36.374 "thread": "nvmf_tgt_poll_group_000", 00:15:36.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:36.375 "listen_address": { 00:15:36.375 "trtype": "TCP", 00:15:36.375 "adrfam": "IPv4", 00:15:36.375 "traddr": "10.0.0.2", 00:15:36.375 "trsvcid": "4420" 00:15:36.375 }, 00:15:36.375 "peer_address": { 00:15:36.375 "trtype": "TCP", 00:15:36.375 "adrfam": "IPv4", 00:15:36.375 "traddr": "10.0.0.1", 00:15:36.375 "trsvcid": "53156" 00:15:36.375 }, 00:15:36.375 "auth": { 00:15:36.375 "state": "completed", 00:15:36.375 "digest": "sha256", 00:15:36.375 "dhgroup": "null" 00:15:36.375 } 00:15:36.375 } 00:15:36.375 ]' 00:15:36.375 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.375 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.375 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.375 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.375 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.635 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.635 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.635 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.635 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:15:36.635 15:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.576 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.837 00:15:37.837 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.837 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.837 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.099 { 00:15:38.099 "cntlid": 3, 00:15:38.099 "qid": 0, 00:15:38.099 "state": "enabled", 00:15:38.099 "thread": "nvmf_tgt_poll_group_000", 00:15:38.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:38.099 "listen_address": { 00:15:38.099 "trtype": "TCP", 00:15:38.099 "adrfam": "IPv4", 00:15:38.099 
"traddr": "10.0.0.2", 00:15:38.099 "trsvcid": "4420" 00:15:38.099 }, 00:15:38.099 "peer_address": { 00:15:38.099 "trtype": "TCP", 00:15:38.099 "adrfam": "IPv4", 00:15:38.099 "traddr": "10.0.0.1", 00:15:38.099 "trsvcid": "53188" 00:15:38.099 }, 00:15:38.099 "auth": { 00:15:38.099 "state": "completed", 00:15:38.099 "digest": "sha256", 00:15:38.099 "dhgroup": "null" 00:15:38.099 } 00:15:38.099 } 00:15:38.099 ]' 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.099 15:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.360 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:15:38.360 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 
--hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:15:39.303 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.303 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:39.303 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.303 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.303 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.303 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.303 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.303 15:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.303 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.563 00:15:39.563 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.563 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.563 
15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.823 { 00:15:39.823 "cntlid": 5, 00:15:39.823 "qid": 0, 00:15:39.823 "state": "enabled", 00:15:39.823 "thread": "nvmf_tgt_poll_group_000", 00:15:39.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:39.823 "listen_address": { 00:15:39.823 "trtype": "TCP", 00:15:39.823 "adrfam": "IPv4", 00:15:39.823 "traddr": "10.0.0.2", 00:15:39.823 "trsvcid": "4420" 00:15:39.823 }, 00:15:39.823 "peer_address": { 00:15:39.823 "trtype": "TCP", 00:15:39.823 "adrfam": "IPv4", 00:15:39.823 "traddr": "10.0.0.1", 00:15:39.823 "trsvcid": "41186" 00:15:39.823 }, 00:15:39.823 "auth": { 00:15:39.823 "state": "completed", 00:15:39.823 "digest": "sha256", 00:15:39.823 "dhgroup": "null" 00:15:39.823 } 00:15:39.823 } 00:15:39.823 ]' 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.823 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.824 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.824 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.824 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.085 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:15:40.085 15:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:15:41.029 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.029 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:41.029 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.030 15:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.291 00:15:41.291 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.291 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.291 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.552 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.552 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.552 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.553 
15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.553 { 00:15:41.553 "cntlid": 7, 00:15:41.553 "qid": 0, 00:15:41.553 "state": "enabled", 00:15:41.553 "thread": "nvmf_tgt_poll_group_000", 00:15:41.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:41.553 "listen_address": { 00:15:41.553 "trtype": "TCP", 00:15:41.553 "adrfam": "IPv4", 00:15:41.553 "traddr": "10.0.0.2", 00:15:41.553 "trsvcid": "4420" 00:15:41.553 }, 00:15:41.553 "peer_address": { 00:15:41.553 "trtype": "TCP", 00:15:41.553 "adrfam": "IPv4", 00:15:41.553 "traddr": "10.0.0.1", 00:15:41.553 "trsvcid": "41224" 00:15:41.553 }, 00:15:41.553 "auth": { 00:15:41.553 "state": "completed", 00:15:41.553 "digest": "sha256", 00:15:41.553 "dhgroup": "null" 00:15:41.553 } 00:15:41.553 } 00:15:41.553 ]' 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.553 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.813 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:15:41.813 15:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.755 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.015 00:15:43.015 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.015 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.015 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.015 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.015 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.015 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.015 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.276 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.276 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.276 { 00:15:43.276 "cntlid": 9, 00:15:43.276 "qid": 0, 00:15:43.276 "state": "enabled", 00:15:43.276 "thread": "nvmf_tgt_poll_group_000", 00:15:43.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:43.276 "listen_address": { 00:15:43.276 "trtype": "TCP", 00:15:43.276 "adrfam": "IPv4", 00:15:43.276 "traddr": "10.0.0.2", 00:15:43.276 "trsvcid": "4420" 00:15:43.276 }, 00:15:43.276 "peer_address": { 00:15:43.276 "trtype": "TCP", 00:15:43.276 "adrfam": "IPv4", 00:15:43.276 "traddr": "10.0.0.1", 00:15:43.276 "trsvcid": "41248" 00:15:43.276 
}, 00:15:43.276 "auth": { 00:15:43.276 "state": "completed", 00:15:43.276 "digest": "sha256", 00:15:43.276 "dhgroup": "ffdhe2048" 00:15:43.276 } 00:15:43.276 } 00:15:43.276 ]' 00:15:43.276 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.276 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.276 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.276 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.276 15:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.276 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.276 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.276 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.536 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:15:43.536 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret 
DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:15:44.105 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.367 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:44.367 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.367 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.367 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.367 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.367 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.367 15:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.367 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.626 00:15:44.626 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.626 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.626 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.887 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.887 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.887 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.887 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.888 { 00:15:44.888 "cntlid": 11, 00:15:44.888 "qid": 0, 00:15:44.888 "state": "enabled", 00:15:44.888 "thread": "nvmf_tgt_poll_group_000", 00:15:44.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:44.888 "listen_address": { 00:15:44.888 "trtype": "TCP", 00:15:44.888 "adrfam": "IPv4", 00:15:44.888 "traddr": "10.0.0.2", 00:15:44.888 "trsvcid": "4420" 00:15:44.888 }, 00:15:44.888 "peer_address": { 00:15:44.888 "trtype": "TCP", 00:15:44.888 "adrfam": "IPv4", 00:15:44.888 "traddr": "10.0.0.1", 00:15:44.888 "trsvcid": "41272" 00:15:44.888 }, 00:15:44.888 "auth": { 00:15:44.888 "state": "completed", 00:15:44.888 "digest": "sha256", 00:15:44.888 "dhgroup": "ffdhe2048" 00:15:44.888 } 00:15:44.888 } 00:15:44.888 ]' 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.888 15:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.888 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.147 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:15:45.147 15:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.088 15:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.348 00:15:46.348 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.348 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.348 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.608 15:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.608 { 00:15:46.608 "cntlid": 13, 00:15:46.608 "qid": 0, 00:15:46.608 "state": "enabled", 00:15:46.608 "thread": "nvmf_tgt_poll_group_000", 00:15:46.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:46.608 "listen_address": { 00:15:46.608 "trtype": "TCP", 00:15:46.608 "adrfam": "IPv4", 00:15:46.608 "traddr": "10.0.0.2", 00:15:46.608 "trsvcid": "4420" 00:15:46.608 }, 00:15:46.608 "peer_address": { 00:15:46.608 "trtype": "TCP", 00:15:46.608 "adrfam": "IPv4", 00:15:46.608 "traddr": "10.0.0.1", 00:15:46.608 "trsvcid": "41292" 00:15:46.608 }, 00:15:46.608 "auth": { 00:15:46.608 "state": "completed", 00:15:46.608 "digest": "sha256", 00:15:46.608 "dhgroup": "ffdhe2048" 00:15:46.608 } 00:15:46.608 } 00:15:46.608 ]' 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.608 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.609 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.869 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:15:46.869 15:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.807 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.067 00:15:48.067 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.067 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.067 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.328 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.328 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.328 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.328 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.328 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.328 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.328 { 00:15:48.328 "cntlid": 15, 00:15:48.328 "qid": 0, 00:15:48.328 "state": "enabled", 00:15:48.328 "thread": "nvmf_tgt_poll_group_000", 00:15:48.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:48.328 "listen_address": { 00:15:48.328 "trtype": "TCP", 00:15:48.328 "adrfam": "IPv4", 00:15:48.328 "traddr": "10.0.0.2", 00:15:48.328 "trsvcid": "4420" 00:15:48.328 }, 00:15:48.328 "peer_address": { 00:15:48.328 "trtype": "TCP", 00:15:48.328 "adrfam": "IPv4", 00:15:48.328 "traddr": "10.0.0.1", 
00:15:48.328 "trsvcid": "41328" 00:15:48.328 }, 00:15:48.328 "auth": { 00:15:48.328 "state": "completed", 00:15:48.328 "digest": "sha256", 00:15:48.328 "dhgroup": "ffdhe2048" 00:15:48.328 } 00:15:48.328 } 00:15:48.328 ]' 00:15:48.328 15:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.328 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.328 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.329 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:48.329 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.329 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.329 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.329 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.588 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:15:48.588 15:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.528 15:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.528 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.788 00:15:49.788 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.788 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.788 15:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.048 { 00:15:50.048 "cntlid": 17, 00:15:50.048 "qid": 0, 00:15:50.048 "state": "enabled", 00:15:50.048 "thread": "nvmf_tgt_poll_group_000", 00:15:50.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:50.048 "listen_address": { 00:15:50.048 "trtype": "TCP", 00:15:50.048 "adrfam": "IPv4", 00:15:50.048 "traddr": "10.0.0.2", 00:15:50.048 "trsvcid": "4420" 00:15:50.048 }, 00:15:50.048 "peer_address": { 00:15:50.048 "trtype": "TCP", 00:15:50.048 "adrfam": "IPv4", 00:15:50.048 "traddr": "10.0.0.1", 00:15:50.048 "trsvcid": "33000" 00:15:50.048 }, 00:15:50.048 "auth": { 00:15:50.048 "state": "completed", 00:15:50.048 "digest": "sha256", 00:15:50.048 "dhgroup": "ffdhe3072" 00:15:50.048 } 00:15:50.048 } 00:15:50.048 ]' 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.048 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.309 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:15:50.309 15:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.250 15:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.510 00:15:51.510 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.510 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.510 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.770 
15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.770 { 00:15:51.770 "cntlid": 19, 00:15:51.770 "qid": 0, 00:15:51.770 "state": "enabled", 00:15:51.770 "thread": "nvmf_tgt_poll_group_000", 00:15:51.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:51.770 "listen_address": { 00:15:51.770 "trtype": "TCP", 00:15:51.770 "adrfam": "IPv4", 00:15:51.770 "traddr": "10.0.0.2", 00:15:51.770 "trsvcid": "4420" 00:15:51.770 }, 00:15:51.770 "peer_address": { 00:15:51.770 "trtype": "TCP", 00:15:51.770 "adrfam": "IPv4", 00:15:51.770 "traddr": "10.0.0.1", 00:15:51.770 "trsvcid": "33024" 00:15:51.770 }, 00:15:51.770 "auth": { 00:15:51.770 "state": "completed", 00:15:51.770 "digest": "sha256", 00:15:51.770 "dhgroup": "ffdhe3072" 00:15:51.770 } 00:15:51.770 } 00:15:51.770 ]' 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.770 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.030 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:15:52.030 15:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.969 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.970 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.970 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.970 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.970 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.970 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.970 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.970 15:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.229 00:15:53.230 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.230 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.230 15:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.489 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.489 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.489 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.489 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.489 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.489 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.489 { 00:15:53.489 "cntlid": 21, 00:15:53.489 "qid": 0, 00:15:53.489 "state": "enabled", 00:15:53.489 "thread": "nvmf_tgt_poll_group_000", 00:15:53.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:53.489 "listen_address": { 00:15:53.489 "trtype": "TCP", 00:15:53.489 "adrfam": "IPv4", 00:15:53.489 "traddr": "10.0.0.2", 00:15:53.490 "trsvcid": "4420" 00:15:53.490 }, 00:15:53.490 "peer_address": { 
00:15:53.490 "trtype": "TCP", 00:15:53.490 "adrfam": "IPv4", 00:15:53.490 "traddr": "10.0.0.1", 00:15:53.490 "trsvcid": "33046" 00:15:53.490 }, 00:15:53.490 "auth": { 00:15:53.490 "state": "completed", 00:15:53.490 "digest": "sha256", 00:15:53.490 "dhgroup": "ffdhe3072" 00:15:53.490 } 00:15:53.490 } 00:15:53.490 ]' 00:15:53.490 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.490 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.490 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.490 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.490 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.490 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.490 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.490 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.749 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:15:53.749 15:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret 
DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:54.689 15:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.689 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.949 00:15:54.949 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.949 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.949 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.209 { 00:15:55.209 "cntlid": 23, 00:15:55.209 "qid": 0, 00:15:55.209 "state": "enabled", 00:15:55.209 "thread": "nvmf_tgt_poll_group_000", 00:15:55.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:55.209 "listen_address": { 00:15:55.209 "trtype": "TCP", 00:15:55.209 "adrfam": "IPv4", 00:15:55.209 "traddr": "10.0.0.2", 00:15:55.209 "trsvcid": "4420" 00:15:55.209 }, 00:15:55.209 "peer_address": { 00:15:55.209 "trtype": "TCP", 00:15:55.209 "adrfam": "IPv4", 00:15:55.209 "traddr": "10.0.0.1", 00:15:55.209 "trsvcid": "33088" 00:15:55.209 }, 00:15:55.209 "auth": { 00:15:55.209 "state": "completed", 00:15:55.209 "digest": "sha256", 00:15:55.209 "dhgroup": "ffdhe3072" 00:15:55.209 } 00:15:55.209 } 00:15:55.209 ]' 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.209 15:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.209 15:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.209 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.209 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.209 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.469 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:15:55.469 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.411 15:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.411 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.671 00:15:56.671 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.671 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.671 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.932 15:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.932 { 00:15:56.932 "cntlid": 25, 00:15:56.932 "qid": 0, 00:15:56.932 "state": "enabled", 00:15:56.932 "thread": "nvmf_tgt_poll_group_000", 00:15:56.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:56.932 "listen_address": { 00:15:56.932 "trtype": "TCP", 00:15:56.932 "adrfam": "IPv4", 00:15:56.932 "traddr": "10.0.0.2", 00:15:56.932 "trsvcid": "4420" 00:15:56.932 }, 00:15:56.932 "peer_address": { 00:15:56.932 "trtype": "TCP", 00:15:56.932 "adrfam": "IPv4", 00:15:56.932 "traddr": "10.0.0.1", 00:15:56.932 "trsvcid": "33112" 00:15:56.932 }, 00:15:56.932 "auth": { 00:15:56.932 "state": "completed", 00:15:56.932 "digest": "sha256", 00:15:56.932 "dhgroup": "ffdhe4096" 00:15:56.932 } 00:15:56.932 } 00:15:56.932 ]' 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.932 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.192 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:15:57.192 15:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.134 15:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.134 15:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.394 00:15:58.394 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.394 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.394 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.655 { 00:15:58.655 "cntlid": 27, 00:15:58.655 "qid": 0, 00:15:58.655 "state": "enabled", 00:15:58.655 "thread": "nvmf_tgt_poll_group_000", 00:15:58.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:15:58.655 "listen_address": { 00:15:58.655 "trtype": "TCP", 00:15:58.655 "adrfam": "IPv4", 00:15:58.655 "traddr": "10.0.0.2", 00:15:58.655 
"trsvcid": "4420" 00:15:58.655 }, 00:15:58.655 "peer_address": { 00:15:58.655 "trtype": "TCP", 00:15:58.655 "adrfam": "IPv4", 00:15:58.655 "traddr": "10.0.0.1", 00:15:58.655 "trsvcid": "33136" 00:15:58.655 }, 00:15:58.655 "auth": { 00:15:58.655 "state": "completed", 00:15:58.655 "digest": "sha256", 00:15:58.655 "dhgroup": "ffdhe4096" 00:15:58.655 } 00:15:58.655 } 00:15:58.655 ]' 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.655 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.915 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:15:58.915 15:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 
80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.857 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.119 00:16:00.119 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.119 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:00.119 15:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.380 { 00:16:00.380 "cntlid": 29, 00:16:00.380 "qid": 0, 00:16:00.380 "state": "enabled", 00:16:00.380 "thread": "nvmf_tgt_poll_group_000", 00:16:00.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:00.380 "listen_address": { 00:16:00.380 "trtype": "TCP", 00:16:00.380 "adrfam": "IPv4", 00:16:00.380 "traddr": "10.0.0.2", 00:16:00.380 "trsvcid": "4420" 00:16:00.380 }, 00:16:00.380 "peer_address": { 00:16:00.380 "trtype": "TCP", 00:16:00.380 "adrfam": "IPv4", 00:16:00.380 "traddr": "10.0.0.1", 00:16:00.380 "trsvcid": "46650" 00:16:00.380 }, 00:16:00.380 "auth": { 00:16:00.380 "state": "completed", 00:16:00.380 "digest": "sha256", 00:16:00.380 "dhgroup": "ffdhe4096" 00:16:00.380 } 00:16:00.380 } 00:16:00.380 ]' 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.380 15:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.380 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.381 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.716 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:00.716 15:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:01.343 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.343 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:01.343 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.343 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.343 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.343 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.343 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.343 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.604 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.864 00:16:01.864 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.864 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.864 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.125 { 00:16:02.125 "cntlid": 31, 00:16:02.125 "qid": 0, 00:16:02.125 "state": "enabled", 00:16:02.125 "thread": "nvmf_tgt_poll_group_000", 00:16:02.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:02.125 "listen_address": { 00:16:02.125 "trtype": "TCP", 00:16:02.125 "adrfam": "IPv4", 00:16:02.125 "traddr": "10.0.0.2", 00:16:02.125 "trsvcid": "4420" 00:16:02.125 }, 00:16:02.125 "peer_address": { 00:16:02.125 "trtype": "TCP", 00:16:02.125 "adrfam": "IPv4", 00:16:02.125 "traddr": "10.0.0.1", 00:16:02.125 "trsvcid": "46696" 00:16:02.125 }, 00:16:02.125 "auth": { 00:16:02.125 "state": "completed", 00:16:02.125 "digest": "sha256", 00:16:02.125 "dhgroup": "ffdhe4096" 00:16:02.125 } 00:16:02.125 } 00:16:02.125 ]' 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.125 15:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.386 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:02.386 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:03.329 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.329 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:03.329 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.329 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.329 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.329 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.329 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.329 15:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.329 15:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.329 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.899 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.899 { 00:16:03.899 "cntlid": 33, 00:16:03.899 "qid": 0, 00:16:03.899 "state": "enabled", 00:16:03.899 "thread": "nvmf_tgt_poll_group_000", 00:16:03.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:03.899 "listen_address": { 00:16:03.899 "trtype": "TCP", 00:16:03.899 "adrfam": "IPv4", 00:16:03.899 "traddr": "10.0.0.2", 00:16:03.899 
"trsvcid": "4420" 00:16:03.899 }, 00:16:03.899 "peer_address": { 00:16:03.899 "trtype": "TCP", 00:16:03.899 "adrfam": "IPv4", 00:16:03.899 "traddr": "10.0.0.1", 00:16:03.899 "trsvcid": "46728" 00:16:03.899 }, 00:16:03.899 "auth": { 00:16:03.899 "state": "completed", 00:16:03.899 "digest": "sha256", 00:16:03.899 "dhgroup": "ffdhe6144" 00:16:03.899 } 00:16:03.899 } 00:16:03.899 ]' 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.899 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.160 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.160 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.160 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.160 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.160 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.160 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:04.160 15:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.103 15:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.103 15:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.674 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.674 { 00:16:05.674 "cntlid": 35, 00:16:05.674 "qid": 0, 00:16:05.674 "state": "enabled", 00:16:05.674 "thread": "nvmf_tgt_poll_group_000", 00:16:05.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:05.674 "listen_address": { 00:16:05.674 "trtype": "TCP", 00:16:05.674 "adrfam": "IPv4", 00:16:05.674 "traddr": "10.0.0.2", 00:16:05.674 "trsvcid": "4420" 00:16:05.674 }, 00:16:05.674 "peer_address": { 00:16:05.674 "trtype": "TCP", 00:16:05.674 "adrfam": "IPv4", 00:16:05.674 "traddr": "10.0.0.1", 00:16:05.674 "trsvcid": "46754" 00:16:05.674 }, 00:16:05.674 "auth": { 00:16:05.674 "state": "completed", 00:16:05.674 "digest": "sha256", 00:16:05.674 "dhgroup": "ffdhe6144" 00:16:05.674 } 00:16:05.674 } 00:16:05.674 ]' 00:16:05.674 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.935 15:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.935 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.935 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.935 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.935 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.935 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.935 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.195 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:06.195 15:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:06.766 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.766 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:06.766 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.766 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.766 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.766 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.766 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:06.766 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.027 15:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.287 00:16:07.287 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.287 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.287 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.548 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.548 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.548 15:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.548 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.548 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.548 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.548 { 00:16:07.548 "cntlid": 37, 00:16:07.548 "qid": 0, 00:16:07.548 "state": "enabled", 00:16:07.548 "thread": "nvmf_tgt_poll_group_000", 00:16:07.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:07.548 "listen_address": { 00:16:07.548 "trtype": "TCP", 00:16:07.548 "adrfam": "IPv4", 00:16:07.548 "traddr": "10.0.0.2", 00:16:07.548 "trsvcid": "4420" 00:16:07.548 }, 00:16:07.548 "peer_address": { 00:16:07.548 "trtype": "TCP", 00:16:07.548 "adrfam": "IPv4", 00:16:07.548 "traddr": "10.0.0.1", 00:16:07.548 "trsvcid": "46802" 00:16:07.548 }, 00:16:07.548 "auth": { 00:16:07.548 "state": "completed", 00:16:07.548 "digest": "sha256", 00:16:07.548 "dhgroup": "ffdhe6144" 00:16:07.548 } 00:16:07.548 } 00:16:07.548 ]' 00:16:07.548 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.548 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.548 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.549 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.549 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.809 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.809 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.809 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.809 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:07.809 15:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.752 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.324 00:16:09.324 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.324 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.324 15:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.324 { 00:16:09.324 "cntlid": 39, 00:16:09.324 "qid": 0, 00:16:09.324 "state": "enabled", 00:16:09.324 "thread": "nvmf_tgt_poll_group_000", 00:16:09.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:09.324 "listen_address": { 00:16:09.324 "trtype": "TCP", 00:16:09.324 "adrfam": 
"IPv4", 00:16:09.324 "traddr": "10.0.0.2", 00:16:09.324 "trsvcid": "4420" 00:16:09.324 }, 00:16:09.324 "peer_address": { 00:16:09.324 "trtype": "TCP", 00:16:09.324 "adrfam": "IPv4", 00:16:09.324 "traddr": "10.0.0.1", 00:16:09.324 "trsvcid": "39818" 00:16:09.324 }, 00:16:09.324 "auth": { 00:16:09.324 "state": "completed", 00:16:09.324 "digest": "sha256", 00:16:09.324 "dhgroup": "ffdhe6144" 00:16:09.324 } 00:16:09.324 } 00:16:09.324 ]' 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.324 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.586 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.586 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.586 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.586 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.586 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.586 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:09.586 15:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 
80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.530 
15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.530 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.101 00:16:11.101 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.101 15:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.101 15:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.364 { 00:16:11.364 "cntlid": 41, 00:16:11.364 "qid": 0, 00:16:11.364 "state": "enabled", 00:16:11.364 "thread": "nvmf_tgt_poll_group_000", 00:16:11.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:11.364 "listen_address": { 00:16:11.364 "trtype": "TCP", 00:16:11.364 "adrfam": "IPv4", 00:16:11.364 "traddr": "10.0.0.2", 00:16:11.364 "trsvcid": "4420" 00:16:11.364 }, 00:16:11.364 "peer_address": { 00:16:11.364 "trtype": "TCP", 00:16:11.364 "adrfam": "IPv4", 00:16:11.364 "traddr": "10.0.0.1", 00:16:11.364 "trsvcid": "39844" 00:16:11.364 }, 00:16:11.364 "auth": { 00:16:11.364 "state": "completed", 00:16:11.364 "digest": "sha256", 00:16:11.364 "dhgroup": "ffdhe8192" 00:16:11.364 } 00:16:11.364 } 00:16:11.364 ]' 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.364 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.627 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.627 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.627 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.627 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:11.627 15:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.569 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.141 00:16:13.141 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.141 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.141 15:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.402 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.402 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.402 15:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.403 { 00:16:13.403 "cntlid": 43, 00:16:13.403 "qid": 0, 00:16:13.403 "state": "enabled", 00:16:13.403 "thread": "nvmf_tgt_poll_group_000", 00:16:13.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:13.403 "listen_address": { 00:16:13.403 "trtype": "TCP", 00:16:13.403 "adrfam": "IPv4", 00:16:13.403 "traddr": "10.0.0.2", 00:16:13.403 "trsvcid": "4420" 00:16:13.403 }, 00:16:13.403 "peer_address": { 00:16:13.403 "trtype": "TCP", 00:16:13.403 "adrfam": "IPv4", 00:16:13.403 "traddr": "10.0.0.1", 00:16:13.403 "trsvcid": "39850" 00:16:13.403 }, 00:16:13.403 "auth": { 00:16:13.403 "state": "completed", 00:16:13.403 "digest": "sha256", 00:16:13.403 "dhgroup": "ffdhe8192" 00:16:13.403 } 00:16:13.403 } 00:16:13.403 ]' 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.403 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.663 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:13.663 15:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.606 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.607 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.178 00:16:15.178 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.178 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.178 15:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.439 { 00:16:15.439 "cntlid": 45, 00:16:15.439 "qid": 0, 00:16:15.439 "state": "enabled", 00:16:15.439 "thread": "nvmf_tgt_poll_group_000", 00:16:15.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:15.439 
"listen_address": { 00:16:15.439 "trtype": "TCP", 00:16:15.439 "adrfam": "IPv4", 00:16:15.439 "traddr": "10.0.0.2", 00:16:15.439 "trsvcid": "4420" 00:16:15.439 }, 00:16:15.439 "peer_address": { 00:16:15.439 "trtype": "TCP", 00:16:15.439 "adrfam": "IPv4", 00:16:15.439 "traddr": "10.0.0.1", 00:16:15.439 "trsvcid": "39880" 00:16:15.439 }, 00:16:15.439 "auth": { 00:16:15.439 "state": "completed", 00:16:15.439 "digest": "sha256", 00:16:15.439 "dhgroup": "ffdhe8192" 00:16:15.439 } 00:16:15.439 } 00:16:15.439 ]' 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.439 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.699 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:15.699 15:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.641 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.212 00:16:17.212 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.212 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:17.212 15:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.212 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.212 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.212 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.212 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.212 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.212 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.212 { 00:16:17.212 "cntlid": 47, 00:16:17.212 "qid": 0, 00:16:17.212 "state": "enabled", 00:16:17.212 "thread": "nvmf_tgt_poll_group_000", 00:16:17.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:17.212 "listen_address": { 00:16:17.212 "trtype": "TCP", 00:16:17.212 "adrfam": "IPv4", 00:16:17.212 "traddr": "10.0.0.2", 00:16:17.212 "trsvcid": "4420" 00:16:17.212 }, 00:16:17.212 "peer_address": { 00:16:17.212 "trtype": "TCP", 00:16:17.212 "adrfam": "IPv4", 00:16:17.212 "traddr": "10.0.0.1", 00:16:17.212 "trsvcid": "39896" 00:16:17.212 }, 00:16:17.212 "auth": { 00:16:17.212 "state": "completed", 00:16:17.212 "digest": "sha256", 00:16:17.212 "dhgroup": "ffdhe8192" 00:16:17.212 } 00:16:17.212 } 00:16:17.212 ]' 00:16:17.212 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.472 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.472 15:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.472 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.472 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.472 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.472 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.472 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.733 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:17.733 15:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.303 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.563 
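The qpair checks repeated throughout this log (`target/auth.sh@74`-`@77`) pull `nvmf_subsystem_get_qpairs` output and assert on the negotiated `auth` fields with `jq -r`. A minimal stand-alone sketch of that verification step, using a JSON fragment copied from the log above and `sed` in place of `jq` so it runs without a live target (the `get_field` helper is illustrative, not part of auth.sh):

```shell
# Verify negotiated DH-CHAP parameters, as the log's @75-@77 checks do.
# JSON fragment copied from the key2/sha256/ffdhe8192 iteration above.
qpairs='[{"auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe8192"}}]'
# Poor man's jq: extract a string field by name (auth.sh itself uses jq -r).
get_field() { printf '%s\n' "$qpairs" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"; }
[ "$(get_field digest)" = "sha256" ]    && echo digest-ok
[ "$(get_field dhgroup)" = "ffdhe8192" ] && echo dhgroup-ok
[ "$(get_field state)" = "completed" ]   && echo state-ok
```

An `auth.state` of anything other than `completed` (or a missing `auth` object) is what these assertions are guarding against after each `bdev_nvme_attach_controller`.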
15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.563 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.823 00:16:18.823 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.823 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.823 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.083 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.083 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.083 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.083 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.083 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.083 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.083 { 00:16:19.083 "cntlid": 49, 00:16:19.083 "qid": 0, 00:16:19.083 "state": "enabled", 00:16:19.083 "thread": "nvmf_tgt_poll_group_000", 00:16:19.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:19.083 "listen_address": { 00:16:19.083 "trtype": "TCP", 00:16:19.083 "adrfam": "IPv4", 00:16:19.083 "traddr": "10.0.0.2", 00:16:19.083 "trsvcid": "4420" 00:16:19.083 }, 00:16:19.083 "peer_address": { 00:16:19.083 "trtype": "TCP", 00:16:19.083 "adrfam": "IPv4", 00:16:19.083 "traddr": "10.0.0.1", 00:16:19.083 "trsvcid": "39922" 00:16:19.083 }, 00:16:19.083 "auth": { 00:16:19.083 "state": "completed", 00:16:19.083 "digest": "sha384", 00:16:19.084 "dhgroup": "null" 00:16:19.084 } 00:16:19.084 } 00:16:19.084 ]' 00:16:19.084 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.084 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.084 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.084 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.084 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.084 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.084 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:19.084 15:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.346 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:19.346 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:19.916 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.176 15:13:29 
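Each iteration visible in this log runs the same sequence from `target/auth.sh@118`-`@123`: set the host-side DH-CHAP digest/dhgroup, register the host NQN with one key, attach a controller, verify it and its qpair auth state, then detach and remove the host. A dry-run sketch of that loop, echoing the `rpc.py` invocations instead of executing them (the socket path, address, and NQNs are copied from the log; nothing here needs a running SPDK target):

```shell
# Dry-run of the per-key DH-CHAP loop; commands are printed, not executed.
rpc="scripts/rpc.py -s /var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204"
digest=sha384 dhgroup=null
for keyid in 0 1 2 3; do
  echo "$rpc bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
  echo "$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key$keyid"
  echo "$rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key$keyid"
  echo "$rpc bdev_nvme_get_controllers"          # expect .[].name == nvme0
  echo "$rpc nvmf_subsystem_get_qpairs $subnqn"  # expect auth.state == completed
  echo "$rpc bdev_nvme_detach_controller nvme0"
  echo "$rpc nvmf_subsystem_remove_host $subnqn $hostnqn"
done
```

In the real script each pass also does an `nvme connect`/`nvme disconnect` round-trip (`target/auth.sh@80`-`@82`) with the matching `DHHC-1` secrets before the host is removed.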
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.176 15:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.436 00:16:20.436 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.436 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.436 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.694 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.694 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.694 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.694 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.694 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.694 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.694 { 00:16:20.694 "cntlid": 51, 00:16:20.694 "qid": 0, 00:16:20.694 "state": "enabled", 00:16:20.694 "thread": "nvmf_tgt_poll_group_000", 00:16:20.695 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:20.695 "listen_address": { 00:16:20.695 "trtype": "TCP", 00:16:20.695 "adrfam": "IPv4", 00:16:20.695 "traddr": "10.0.0.2", 00:16:20.695 "trsvcid": "4420" 00:16:20.695 }, 00:16:20.695 "peer_address": { 00:16:20.695 "trtype": "TCP", 00:16:20.695 "adrfam": "IPv4", 00:16:20.695 "traddr": "10.0.0.1", 00:16:20.695 "trsvcid": "37024" 00:16:20.695 }, 00:16:20.695 "auth": { 00:16:20.695 "state": "completed", 00:16:20.695 "digest": "sha384", 00:16:20.695 "dhgroup": "null" 00:16:20.695 } 00:16:20.695 } 00:16:20.695 ]' 00:16:20.695 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.695 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.695 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.695 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:20.695 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.954 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.954 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.954 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.954 15:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:20.954 15:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.894 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.154 00:16:22.154 15:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.154 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.154 15:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.414 { 00:16:22.414 "cntlid": 53, 00:16:22.414 "qid": 0, 00:16:22.414 "state": "enabled", 00:16:22.414 "thread": "nvmf_tgt_poll_group_000", 00:16:22.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:22.414 "listen_address": { 00:16:22.414 "trtype": "TCP", 00:16:22.414 "adrfam": "IPv4", 00:16:22.414 "traddr": "10.0.0.2", 00:16:22.414 "trsvcid": "4420" 00:16:22.414 }, 00:16:22.414 "peer_address": { 00:16:22.414 "trtype": "TCP", 00:16:22.414 "adrfam": "IPv4", 00:16:22.414 "traddr": "10.0.0.1", 00:16:22.414 "trsvcid": "37058" 00:16:22.414 }, 00:16:22.414 "auth": { 00:16:22.414 "state": "completed", 00:16:22.414 "digest": "sha384", 00:16:22.414 "dhgroup": "null" 00:16:22.414 } 00:16:22.414 } 00:16:22.414 ]' 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.414 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.674 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:22.674 15:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:23.615 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:23.616 
15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.616 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.877 00:16:23.877 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.877 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.877 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.138 15:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.138 { 00:16:24.138 "cntlid": 55, 00:16:24.138 "qid": 0, 00:16:24.138 "state": "enabled", 00:16:24.138 "thread": "nvmf_tgt_poll_group_000", 00:16:24.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:24.138 "listen_address": { 00:16:24.138 "trtype": "TCP", 00:16:24.138 "adrfam": "IPv4", 00:16:24.138 "traddr": "10.0.0.2", 00:16:24.138 "trsvcid": "4420" 00:16:24.138 }, 00:16:24.138 "peer_address": { 00:16:24.138 "trtype": "TCP", 00:16:24.138 "adrfam": "IPv4", 00:16:24.138 "traddr": "10.0.0.1", 00:16:24.138 "trsvcid": "37078" 00:16:24.138 }, 00:16:24.138 "auth": { 00:16:24.138 "state": "completed", 00:16:24.138 "digest": "sha384", 00:16:24.138 "dhgroup": "null" 00:16:24.138 } 00:16:24.138 } 00:16:24.138 ]' 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.138 15:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.397 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:24.397 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:25.337 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.337 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:25.337 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.337 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.337 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.337 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.337 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.337 15:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.337 15:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.337 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.597 00:16:25.597 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.597 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.597 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.857 { 00:16:25.857 "cntlid": 57, 00:16:25.857 "qid": 0, 00:16:25.857 "state": "enabled", 00:16:25.857 "thread": "nvmf_tgt_poll_group_000", 00:16:25.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:25.857 "listen_address": { 00:16:25.857 "trtype": "TCP", 00:16:25.857 "adrfam": "IPv4", 00:16:25.857 "traddr": "10.0.0.2", 00:16:25.857 
"trsvcid": "4420" 00:16:25.857 }, 00:16:25.857 "peer_address": { 00:16:25.857 "trtype": "TCP", 00:16:25.857 "adrfam": "IPv4", 00:16:25.857 "traddr": "10.0.0.1", 00:16:25.857 "trsvcid": "37100" 00:16:25.857 }, 00:16:25.857 "auth": { 00:16:25.857 "state": "completed", 00:16:25.857 "digest": "sha384", 00:16:25.857 "dhgroup": "ffdhe2048" 00:16:25.857 } 00:16:25.857 } 00:16:25.857 ]' 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.857 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.118 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:26.118 15:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.061 15:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.061 15:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.321 00:16:27.321 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.321 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.321 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.582 { 00:16:27.582 "cntlid": 59, 00:16:27.582 "qid": 0, 00:16:27.582 "state": "enabled", 00:16:27.582 "thread": "nvmf_tgt_poll_group_000", 00:16:27.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:27.582 "listen_address": { 00:16:27.582 "trtype": "TCP", 00:16:27.582 "adrfam": "IPv4", 00:16:27.582 "traddr": "10.0.0.2", 00:16:27.582 "trsvcid": "4420" 00:16:27.582 }, 00:16:27.582 "peer_address": { 00:16:27.582 "trtype": "TCP", 00:16:27.582 "adrfam": "IPv4", 00:16:27.582 "traddr": "10.0.0.1", 00:16:27.582 "trsvcid": "37136" 00:16:27.582 }, 00:16:27.582 "auth": { 00:16:27.582 "state": "completed", 00:16:27.582 "digest": "sha384", 00:16:27.582 "dhgroup": "ffdhe2048" 00:16:27.582 } 00:16:27.582 } 00:16:27.582 ]' 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.582 15:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.582 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.843 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:27.843 15:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.784 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.045 00:16:29.045 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.045 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.045 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.306 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.306 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.306 15:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.306 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.306 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.306 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.306 { 00:16:29.306 "cntlid": 61, 00:16:29.306 "qid": 0, 00:16:29.306 "state": "enabled", 00:16:29.306 "thread": "nvmf_tgt_poll_group_000", 00:16:29.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:29.306 "listen_address": { 00:16:29.306 "trtype": "TCP", 00:16:29.306 "adrfam": "IPv4", 00:16:29.306 "traddr": "10.0.0.2", 00:16:29.306 "trsvcid": "4420" 00:16:29.306 }, 00:16:29.306 "peer_address": { 00:16:29.306 "trtype": "TCP", 00:16:29.306 "adrfam": "IPv4", 00:16:29.306 "traddr": "10.0.0.1", 00:16:29.306 "trsvcid": "49318" 00:16:29.306 }, 00:16:29.306 "auth": { 00:16:29.306 "state": "completed", 00:16:29.306 "digest": "sha384", 00:16:29.306 "dhgroup": "ffdhe2048" 00:16:29.306 } 00:16:29.306 } 00:16:29.306 ]' 00:16:29.306 15:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.306 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.306 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.306 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.306 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.306 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.306 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.306 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.566 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:29.566 15:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.506 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.768 00:16:30.768 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.768 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.768 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.030 { 00:16:31.030 "cntlid": 63, 00:16:31.030 "qid": 0, 00:16:31.030 "state": "enabled", 00:16:31.030 "thread": "nvmf_tgt_poll_group_000", 00:16:31.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:31.030 "listen_address": { 00:16:31.030 "trtype": "TCP", 00:16:31.030 "adrfam": 
"IPv4", 00:16:31.030 "traddr": "10.0.0.2", 00:16:31.030 "trsvcid": "4420" 00:16:31.030 }, 00:16:31.030 "peer_address": { 00:16:31.030 "trtype": "TCP", 00:16:31.030 "adrfam": "IPv4", 00:16:31.030 "traddr": "10.0.0.1", 00:16:31.030 "trsvcid": "49336" 00:16:31.030 }, 00:16:31.030 "auth": { 00:16:31.030 "state": "completed", 00:16:31.030 "digest": "sha384", 00:16:31.030 "dhgroup": "ffdhe2048" 00:16:31.030 } 00:16:31.030 } 00:16:31.030 ]' 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.030 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.291 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:31.291 15:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 
80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.231 
15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.231 15:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.492 00:16:32.492 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.492 15:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.492 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.754 { 00:16:32.754 "cntlid": 65, 00:16:32.754 "qid": 0, 00:16:32.754 "state": "enabled", 00:16:32.754 "thread": "nvmf_tgt_poll_group_000", 00:16:32.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:32.754 "listen_address": { 00:16:32.754 "trtype": "TCP", 00:16:32.754 "adrfam": "IPv4", 00:16:32.754 "traddr": "10.0.0.2", 00:16:32.754 "trsvcid": "4420" 00:16:32.754 }, 00:16:32.754 "peer_address": { 00:16:32.754 "trtype": "TCP", 00:16:32.754 "adrfam": "IPv4", 00:16:32.754 "traddr": "10.0.0.1", 00:16:32.754 "trsvcid": "49372" 00:16:32.754 }, 00:16:32.754 "auth": { 00:16:32.754 "state": "completed", 00:16:32.754 "digest": "sha384", 00:16:32.754 "dhgroup": "ffdhe3072" 00:16:32.754 } 00:16:32.754 } 00:16:32.754 ]' 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.754 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.015 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:33.015 15:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:33.586 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:33.847 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.848 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.848 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.848 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.848 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.848 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.109 00:16:34.109 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.109 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.109 15:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.371 15:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.371 { 00:16:34.371 "cntlid": 67, 00:16:34.371 "qid": 0, 00:16:34.371 "state": "enabled", 00:16:34.371 "thread": "nvmf_tgt_poll_group_000", 00:16:34.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:34.371 "listen_address": { 00:16:34.371 "trtype": "TCP", 00:16:34.371 "adrfam": "IPv4", 00:16:34.371 "traddr": "10.0.0.2", 00:16:34.371 "trsvcid": "4420" 00:16:34.371 }, 00:16:34.371 "peer_address": { 00:16:34.371 "trtype": "TCP", 00:16:34.371 "adrfam": "IPv4", 00:16:34.371 "traddr": "10.0.0.1", 00:16:34.371 "trsvcid": "49400" 00:16:34.371 }, 00:16:34.371 "auth": { 00:16:34.371 "state": "completed", 00:16:34.371 "digest": "sha384", 00:16:34.371 "dhgroup": "ffdhe3072" 00:16:34.371 } 00:16:34.371 } 00:16:34.371 ]' 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.371 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.632 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:34.632 15:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.574 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.836 00:16:35.836 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.836 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.836 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.097 { 00:16:36.097 "cntlid": 69, 00:16:36.097 "qid": 0, 00:16:36.097 "state": "enabled", 00:16:36.097 "thread": "nvmf_tgt_poll_group_000", 00:16:36.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:36.097 
"listen_address": { 00:16:36.097 "trtype": "TCP", 00:16:36.097 "adrfam": "IPv4", 00:16:36.097 "traddr": "10.0.0.2", 00:16:36.097 "trsvcid": "4420" 00:16:36.097 }, 00:16:36.097 "peer_address": { 00:16:36.097 "trtype": "TCP", 00:16:36.097 "adrfam": "IPv4", 00:16:36.097 "traddr": "10.0.0.1", 00:16:36.097 "trsvcid": "49418" 00:16:36.097 }, 00:16:36.097 "auth": { 00:16:36.097 "state": "completed", 00:16:36.097 "digest": "sha384", 00:16:36.097 "dhgroup": "ffdhe3072" 00:16:36.097 } 00:16:36.097 } 00:16:36.097 ]' 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.097 15:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.359 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:36.359 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:37.301 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.301 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:37.301 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.301 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.301 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.301 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.301 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.301 15:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.301 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.563 00:16:37.563 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.563 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:37.563 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.825 { 00:16:37.825 "cntlid": 71, 00:16:37.825 "qid": 0, 00:16:37.825 "state": "enabled", 00:16:37.825 "thread": "nvmf_tgt_poll_group_000", 00:16:37.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:37.825 "listen_address": { 00:16:37.825 "trtype": "TCP", 00:16:37.825 "adrfam": "IPv4", 00:16:37.825 "traddr": "10.0.0.2", 00:16:37.825 "trsvcid": "4420" 00:16:37.825 }, 00:16:37.825 "peer_address": { 00:16:37.825 "trtype": "TCP", 00:16:37.825 "adrfam": "IPv4", 00:16:37.825 "traddr": "10.0.0.1", 00:16:37.825 "trsvcid": "49442" 00:16:37.825 }, 00:16:37.825 "auth": { 00:16:37.825 "state": "completed", 00:16:37.825 "digest": "sha384", 00:16:37.825 "dhgroup": "ffdhe3072" 00:16:37.825 } 00:16:37.825 } 00:16:37.825 ]' 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.825 15:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.825 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.086 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:38.086 15:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.028 15:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.290 00:16:39.290 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.290 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.290 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.551 15:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.551 { 00:16:39.551 "cntlid": 73, 00:16:39.551 "qid": 0, 00:16:39.551 "state": "enabled", 00:16:39.551 "thread": "nvmf_tgt_poll_group_000", 00:16:39.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:39.551 "listen_address": { 00:16:39.551 "trtype": "TCP", 00:16:39.551 "adrfam": "IPv4", 00:16:39.551 "traddr": "10.0.0.2", 00:16:39.551 "trsvcid": "4420" 00:16:39.551 }, 00:16:39.551 "peer_address": { 00:16:39.551 "trtype": "TCP", 00:16:39.551 "adrfam": "IPv4", 00:16:39.551 "traddr": "10.0.0.1", 00:16:39.551 "trsvcid": "57940" 00:16:39.551 }, 00:16:39.551 "auth": { 00:16:39.551 "state": "completed", 00:16:39.551 "digest": "sha384", 00:16:39.551 "dhgroup": "ffdhe4096" 00:16:39.551 } 00:16:39.551 } 00:16:39.551 ]' 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.551 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.551 15:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.811 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:39.811 15:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.753 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.014 00:16:41.014 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.014 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.014 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.379 { 00:16:41.379 "cntlid": 75, 00:16:41.379 "qid": 0, 00:16:41.379 "state": "enabled", 00:16:41.379 "thread": "nvmf_tgt_poll_group_000", 00:16:41.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:41.379 
"listen_address": { 00:16:41.379 "trtype": "TCP", 00:16:41.379 "adrfam": "IPv4", 00:16:41.379 "traddr": "10.0.0.2", 00:16:41.379 "trsvcid": "4420" 00:16:41.379 }, 00:16:41.379 "peer_address": { 00:16:41.379 "trtype": "TCP", 00:16:41.379 "adrfam": "IPv4", 00:16:41.379 "traddr": "10.0.0.1", 00:16:41.379 "trsvcid": "57970" 00:16:41.379 }, 00:16:41.379 "auth": { 00:16:41.379 "state": "completed", 00:16:41.379 "digest": "sha384", 00:16:41.379 "dhgroup": "ffdhe4096" 00:16:41.379 } 00:16:41.379 } 00:16:41.379 ]' 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.379 15:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.379 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.379 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.379 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.380 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.380 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.707 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:41.707 15:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:42.278 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.278 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:42.278 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.278 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.278 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.278 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.278 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:42.278 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:42.538 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:42.538 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.538 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:42.538 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.538 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.538 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.539 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.539 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.539 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.539 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.539 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.539 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.539 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.799 00:16:42.799 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:42.799 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.799 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.060 { 00:16:43.060 "cntlid": 77, 00:16:43.060 "qid": 0, 00:16:43.060 "state": "enabled", 00:16:43.060 "thread": "nvmf_tgt_poll_group_000", 00:16:43.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:43.060 "listen_address": { 00:16:43.060 "trtype": "TCP", 00:16:43.060 "adrfam": "IPv4", 00:16:43.060 "traddr": "10.0.0.2", 00:16:43.060 "trsvcid": "4420" 00:16:43.060 }, 00:16:43.060 "peer_address": { 00:16:43.060 "trtype": "TCP", 00:16:43.060 "adrfam": "IPv4", 00:16:43.060 "traddr": "10.0.0.1", 00:16:43.060 "trsvcid": "58002" 00:16:43.060 }, 00:16:43.060 "auth": { 00:16:43.060 "state": "completed", 00:16:43.060 "digest": "sha384", 00:16:43.060 "dhgroup": "ffdhe4096" 00:16:43.060 } 00:16:43.060 } 00:16:43.060 ]' 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.060 15:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.060 15:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.321 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:43.321 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:44.267 15:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.267 15:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.527 00:16:44.527 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.527 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.527 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.810 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.810 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.810 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.810 15:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.810 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.810 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.810 { 00:16:44.810 "cntlid": 79, 00:16:44.810 "qid": 0, 00:16:44.810 "state": "enabled", 00:16:44.810 "thread": "nvmf_tgt_poll_group_000", 00:16:44.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:44.810 "listen_address": { 00:16:44.810 "trtype": "TCP", 00:16:44.810 "adrfam": "IPv4", 00:16:44.810 "traddr": "10.0.0.2", 00:16:44.810 "trsvcid": "4420" 00:16:44.810 }, 00:16:44.810 "peer_address": { 00:16:44.810 "trtype": "TCP", 00:16:44.810 "adrfam": "IPv4", 00:16:44.810 "traddr": "10.0.0.1", 00:16:44.810 "trsvcid": "58036" 00:16:44.811 }, 00:16:44.811 "auth": { 00:16:44.811 "state": "completed", 00:16:44.811 "digest": "sha384", 00:16:44.811 "dhgroup": "ffdhe4096" 00:16:44.811 } 00:16:44.811 } 00:16:44.811 ]' 00:16:44.811 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.811 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.811 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.811 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.811 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.811 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.811 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.811 15:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.072 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:45.072 15:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:45.643 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.907 15:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.481 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.481 { 00:16:46.481 "cntlid": 81, 00:16:46.481 "qid": 0, 00:16:46.481 "state": "enabled", 00:16:46.481 "thread": "nvmf_tgt_poll_group_000", 00:16:46.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:46.481 "listen_address": { 
00:16:46.481 "trtype": "TCP", 00:16:46.481 "adrfam": "IPv4", 00:16:46.481 "traddr": "10.0.0.2", 00:16:46.481 "trsvcid": "4420" 00:16:46.481 }, 00:16:46.481 "peer_address": { 00:16:46.481 "trtype": "TCP", 00:16:46.481 "adrfam": "IPv4", 00:16:46.481 "traddr": "10.0.0.1", 00:16:46.481 "trsvcid": "58062" 00:16:46.481 }, 00:16:46.481 "auth": { 00:16:46.481 "state": "completed", 00:16:46.481 "digest": "sha384", 00:16:46.481 "dhgroup": "ffdhe6144" 00:16:46.481 } 00:16:46.481 } 00:16:46.481 ]' 00:16:46.481 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.482 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.482 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.482 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.742 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.742 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.742 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.742 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.742 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:46.742 15:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:47.684 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.684 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:47.684 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.685 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.257 00:16:48.257 15:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.257 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.257 15:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.257 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.257 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.257 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.257 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.257 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.257 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.257 { 00:16:48.257 "cntlid": 83, 00:16:48.257 "qid": 0, 00:16:48.257 "state": "enabled", 00:16:48.257 "thread": "nvmf_tgt_poll_group_000", 00:16:48.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:48.257 "listen_address": { 00:16:48.257 "trtype": "TCP", 00:16:48.257 "adrfam": "IPv4", 00:16:48.257 "traddr": "10.0.0.2", 00:16:48.257 "trsvcid": "4420" 00:16:48.257 }, 00:16:48.257 "peer_address": { 00:16:48.257 "trtype": "TCP", 00:16:48.257 "adrfam": "IPv4", 00:16:48.257 "traddr": "10.0.0.1", 00:16:48.257 "trsvcid": "58078" 00:16:48.257 }, 00:16:48.257 "auth": { 00:16:48.257 "state": "completed", 00:16:48.257 "digest": "sha384", 00:16:48.257 "dhgroup": "ffdhe6144" 00:16:48.257 } 00:16:48.257 } 00:16:48.257 ]' 00:16:48.257 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:48.518 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.518 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.518 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.518 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.518 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.518 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.518 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.778 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:48.778 15:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:49.347 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.347 15:13:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:49.347 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.347 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.347 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.347 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.347 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.347 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.607 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.868 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.129 { 00:16:50.129 "cntlid": 85, 00:16:50.129 "qid": 0, 00:16:50.129 "state": "enabled", 00:16:50.129 "thread": "nvmf_tgt_poll_group_000", 00:16:50.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:50.129 "listen_address": { 00:16:50.129 "trtype": "TCP", 00:16:50.129 "adrfam": "IPv4", 00:16:50.129 "traddr": "10.0.0.2", 00:16:50.129 "trsvcid": "4420" 00:16:50.129 }, 00:16:50.129 "peer_address": { 00:16:50.129 "trtype": "TCP", 00:16:50.129 "adrfam": "IPv4", 00:16:50.129 "traddr": "10.0.0.1", 00:16:50.129 "trsvcid": "46214" 00:16:50.129 }, 00:16:50.129 "auth": { 00:16:50.129 "state": "completed", 00:16:50.129 "digest": "sha384", 00:16:50.129 "dhgroup": "ffdhe6144" 00:16:50.129 } 00:16:50.129 } 00:16:50.129 ]' 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.129 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.390 15:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.390 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.390 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:50.390 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.390 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.390 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:50.390 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:51.332 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.332 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:51.332 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.332 15:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.332 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.902 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.902 { 00:16:51.902 "cntlid": 87, 00:16:51.902 "qid": 0, 00:16:51.902 "state": "enabled", 00:16:51.902 "thread": "nvmf_tgt_poll_group_000", 00:16:51.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:51.902 "listen_address": { 00:16:51.902 "trtype": 
"TCP", 00:16:51.902 "adrfam": "IPv4", 00:16:51.902 "traddr": "10.0.0.2", 00:16:51.902 "trsvcid": "4420" 00:16:51.902 }, 00:16:51.902 "peer_address": { 00:16:51.902 "trtype": "TCP", 00:16:51.902 "adrfam": "IPv4", 00:16:51.902 "traddr": "10.0.0.1", 00:16:51.902 "trsvcid": "46244" 00:16:51.902 }, 00:16:51.902 "auth": { 00:16:51.902 "state": "completed", 00:16:51.902 "digest": "sha384", 00:16:51.902 "dhgroup": "ffdhe6144" 00:16:51.902 } 00:16:51.902 } 00:16:51.902 ]' 00:16:51.902 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.162 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.162 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.162 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.162 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.162 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.162 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.162 15:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.423 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:52.423 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:52.991 15:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.251 15:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.251 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.820 00:16:53.820 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.820 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.820 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.080 { 00:16:54.080 "cntlid": 89, 00:16:54.080 "qid": 0, 00:16:54.080 "state": "enabled", 00:16:54.080 "thread": "nvmf_tgt_poll_group_000", 00:16:54.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:54.080 "listen_address": { 00:16:54.080 "trtype": "TCP", 00:16:54.080 "adrfam": "IPv4", 00:16:54.080 "traddr": "10.0.0.2", 00:16:54.080 "trsvcid": "4420" 00:16:54.080 }, 00:16:54.080 "peer_address": { 00:16:54.080 "trtype": "TCP", 00:16:54.080 "adrfam": "IPv4", 00:16:54.080 "traddr": "10.0.0.1", 00:16:54.080 "trsvcid": "46286" 00:16:54.080 }, 00:16:54.080 "auth": { 00:16:54.080 "state": "completed", 00:16:54.080 "digest": "sha384", 00:16:54.080 "dhgroup": "ffdhe8192" 00:16:54.080 } 00:16:54.080 } 00:16:54.080 ]' 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.080 15:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.080 15:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.341 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:54.341 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:16:55.280 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:55.280 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.281 15:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.851 00:16:55.851 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.851 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.851 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.851 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.851 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.851 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.851 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.111 { 00:16:56.111 "cntlid": 91, 00:16:56.111 "qid": 0, 00:16:56.111 "state": "enabled", 00:16:56.111 "thread": "nvmf_tgt_poll_group_000", 00:16:56.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:56.111 "listen_address": { 00:16:56.111 "trtype": "TCP", 00:16:56.111 "adrfam": "IPv4", 00:16:56.111 "traddr": "10.0.0.2", 00:16:56.111 "trsvcid": "4420" 00:16:56.111 }, 00:16:56.111 "peer_address": { 00:16:56.111 "trtype": "TCP", 00:16:56.111 "adrfam": "IPv4", 00:16:56.111 "traddr": "10.0.0.1", 00:16:56.111 "trsvcid": "46304" 00:16:56.111 }, 00:16:56.111 "auth": { 00:16:56.111 "state": "completed", 00:16:56.111 "digest": "sha384", 00:16:56.111 "dhgroup": "ffdhe8192" 00:16:56.111 } 00:16:56.111 } 00:16:56.111 ]' 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.111 15:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.371 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:56.371 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:16:56.940 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.200 15:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.768 00:16:57.768 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.768 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.768 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.027 { 00:16:58.027 "cntlid": 93, 00:16:58.027 "qid": 0, 00:16:58.027 "state": "enabled", 00:16:58.027 "thread": "nvmf_tgt_poll_group_000", 00:16:58.027 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:16:58.027 "listen_address": { 00:16:58.027 "trtype": "TCP", 00:16:58.027 "adrfam": "IPv4", 00:16:58.027 "traddr": "10.0.0.2", 00:16:58.027 "trsvcid": "4420" 00:16:58.027 }, 00:16:58.027 "peer_address": { 00:16:58.027 "trtype": "TCP", 00:16:58.027 "adrfam": "IPv4", 00:16:58.027 "traddr": "10.0.0.1", 00:16:58.027 "trsvcid": "46342" 00:16:58.027 }, 00:16:58.027 "auth": { 00:16:58.027 "state": "completed", 00:16:58.027 "digest": "sha384", 00:16:58.027 "dhgroup": "ffdhe8192" 00:16:58.027 } 00:16:58.027 } 00:16:58.027 ]' 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.027 15:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.286 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:58.286 15:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.225 15:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.225 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.225 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.225 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.225 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.795 00:16:59.795 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:59.795 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.795 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.056 { 00:17:00.056 "cntlid": 95, 00:17:00.056 "qid": 0, 00:17:00.056 "state": "enabled", 00:17:00.056 "thread": "nvmf_tgt_poll_group_000", 00:17:00.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:00.056 "listen_address": { 00:17:00.056 "trtype": "TCP", 00:17:00.056 "adrfam": "IPv4", 00:17:00.056 "traddr": "10.0.0.2", 00:17:00.056 "trsvcid": "4420" 00:17:00.056 }, 00:17:00.056 "peer_address": { 00:17:00.056 "trtype": "TCP", 00:17:00.056 "adrfam": "IPv4", 00:17:00.056 "traddr": "10.0.0.1", 00:17:00.056 "trsvcid": "60548" 00:17:00.056 }, 00:17:00.056 "auth": { 00:17:00.056 "state": "completed", 00:17:00.056 "digest": "sha384", 00:17:00.056 "dhgroup": "ffdhe8192" 00:17:00.056 } 00:17:00.056 } 00:17:00.056 ]' 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.056 15:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.056 15:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.316 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:00.316 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.255 15:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.516 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.516 15:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.516 { 00:17:01.516 "cntlid": 97, 00:17:01.516 "qid": 0, 00:17:01.516 "state": "enabled", 00:17:01.516 "thread": "nvmf_tgt_poll_group_000", 00:17:01.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:01.516 "listen_address": { 00:17:01.516 "trtype": "TCP", 00:17:01.516 "adrfam": "IPv4", 00:17:01.516 "traddr": "10.0.0.2", 00:17:01.516 "trsvcid": "4420" 00:17:01.516 }, 00:17:01.516 "peer_address": { 00:17:01.516 "trtype": "TCP", 00:17:01.516 "adrfam": "IPv4", 00:17:01.516 "traddr": "10.0.0.1", 00:17:01.516 "trsvcid": "60582" 00:17:01.516 }, 00:17:01.516 "auth": { 00:17:01.516 "state": "completed", 00:17:01.516 "digest": "sha512", 00:17:01.516 "dhgroup": "null" 00:17:01.516 } 00:17:01.516 } 00:17:01.516 ]' 00:17:01.516 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.777 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.777 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.777 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.777 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.777 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.777 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.777 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.037 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:02.038 15:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:02.609 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.609 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:02.609 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.609 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.609 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.609 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.609 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.609 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.870 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.871 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.131 00:17:03.131 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.131 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.131 15:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.393 { 00:17:03.393 "cntlid": 99, 
00:17:03.393 "qid": 0, 00:17:03.393 "state": "enabled", 00:17:03.393 "thread": "nvmf_tgt_poll_group_000", 00:17:03.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:03.393 "listen_address": { 00:17:03.393 "trtype": "TCP", 00:17:03.393 "adrfam": "IPv4", 00:17:03.393 "traddr": "10.0.0.2", 00:17:03.393 "trsvcid": "4420" 00:17:03.393 }, 00:17:03.393 "peer_address": { 00:17:03.393 "trtype": "TCP", 00:17:03.393 "adrfam": "IPv4", 00:17:03.393 "traddr": "10.0.0.1", 00:17:03.393 "trsvcid": "60622" 00:17:03.393 }, 00:17:03.393 "auth": { 00:17:03.393 "state": "completed", 00:17:03.393 "digest": "sha512", 00:17:03.393 "dhgroup": "null" 00:17:03.393 } 00:17:03.393 } 00:17:03.393 ]' 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.393 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.653 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret 
DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:03.653 15:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.594 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.855 00:17:04.855 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.855 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.855 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.116 { 00:17:05.116 "cntlid": 101, 00:17:05.116 "qid": 0, 00:17:05.116 "state": "enabled", 00:17:05.116 "thread": "nvmf_tgt_poll_group_000", 00:17:05.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:05.116 "listen_address": { 00:17:05.116 "trtype": "TCP", 00:17:05.116 "adrfam": "IPv4", 00:17:05.116 "traddr": "10.0.0.2", 00:17:05.116 "trsvcid": "4420" 00:17:05.116 }, 00:17:05.116 "peer_address": { 00:17:05.116 "trtype": "TCP", 00:17:05.116 "adrfam": "IPv4", 00:17:05.116 "traddr": "10.0.0.1", 00:17:05.116 "trsvcid": "60666" 00:17:05.116 }, 00:17:05.116 "auth": { 00:17:05.116 "state": "completed", 00:17:05.116 "digest": "sha512", 00:17:05.116 "dhgroup": "null" 00:17:05.116 } 00:17:05.116 } 
00:17:05.116 ]' 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.116 15:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.377 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:05.377 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:05.948 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.209 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.209 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:06.209 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.209 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.209 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.209 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.209 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.209 15:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.209 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.470 00:17:06.470 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.470 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.470 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.731 { 00:17:06.731 "cntlid": 103, 00:17:06.731 "qid": 0, 00:17:06.731 "state": "enabled", 00:17:06.731 "thread": "nvmf_tgt_poll_group_000", 00:17:06.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:06.731 "listen_address": { 00:17:06.731 "trtype": "TCP", 00:17:06.731 "adrfam": "IPv4", 00:17:06.731 "traddr": "10.0.0.2", 00:17:06.731 "trsvcid": "4420" 00:17:06.731 }, 00:17:06.731 "peer_address": { 00:17:06.731 "trtype": "TCP", 00:17:06.731 "adrfam": "IPv4", 00:17:06.731 "traddr": "10.0.0.1", 00:17:06.731 "trsvcid": "60690" 00:17:06.731 }, 00:17:06.731 "auth": { 00:17:06.731 "state": "completed", 00:17:06.731 "digest": "sha512", 00:17:06.731 "dhgroup": "null" 00:17:06.731 } 00:17:06.731 } 00:17:06.731 ]' 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.731 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.993 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.993 15:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.993 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.993 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:06.993 15:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.937 15:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.937 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.198 00:17:08.198 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.198 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.198 15:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.459 { 00:17:08.459 "cntlid": 105, 00:17:08.459 "qid": 0, 00:17:08.459 "state": "enabled", 00:17:08.459 "thread": "nvmf_tgt_poll_group_000", 00:17:08.459 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:08.459 "listen_address": { 00:17:08.459 "trtype": "TCP", 00:17:08.459 "adrfam": "IPv4", 00:17:08.459 "traddr": "10.0.0.2", 00:17:08.459 "trsvcid": "4420" 00:17:08.459 }, 00:17:08.459 "peer_address": { 00:17:08.459 "trtype": "TCP", 00:17:08.459 "adrfam": "IPv4", 00:17:08.459 "traddr": "10.0.0.1", 00:17:08.459 "trsvcid": "60708" 00:17:08.459 }, 00:17:08.459 "auth": { 00:17:08.459 "state": "completed", 00:17:08.459 "digest": "sha512", 00:17:08.459 "dhgroup": "ffdhe2048" 00:17:08.459 } 00:17:08.459 } 00:17:08.459 ]' 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.459 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.720 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret 
DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:08.720 15:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:09.665 15:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.665 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.926 00:17:09.926 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.926 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.926 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.187 { 00:17:10.187 "cntlid": 107, 00:17:10.187 "qid": 0, 00:17:10.187 "state": "enabled", 00:17:10.187 "thread": "nvmf_tgt_poll_group_000", 00:17:10.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:10.187 "listen_address": { 00:17:10.187 "trtype": "TCP", 00:17:10.187 "adrfam": "IPv4", 00:17:10.187 "traddr": "10.0.0.2", 00:17:10.187 "trsvcid": "4420" 00:17:10.187 }, 00:17:10.187 "peer_address": { 00:17:10.187 "trtype": "TCP", 00:17:10.187 "adrfam": "IPv4", 00:17:10.187 "traddr": "10.0.0.1", 00:17:10.187 "trsvcid": "44818" 00:17:10.187 }, 00:17:10.187 "auth": { 00:17:10.187 "state": 
"completed", 00:17:10.187 "digest": "sha512", 00:17:10.187 "dhgroup": "ffdhe2048" 00:17:10.187 } 00:17:10.187 } 00:17:10.187 ]' 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.187 15:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.187 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.187 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.187 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.448 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:10.448 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:11.390 15:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.390 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:11.390 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.390 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.390 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.390 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.390 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.390 15:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.390 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.651 00:17:11.651 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.651 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.651 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.912 
15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.912 { 00:17:11.912 "cntlid": 109, 00:17:11.912 "qid": 0, 00:17:11.912 "state": "enabled", 00:17:11.912 "thread": "nvmf_tgt_poll_group_000", 00:17:11.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:11.912 "listen_address": { 00:17:11.912 "trtype": "TCP", 00:17:11.912 "adrfam": "IPv4", 00:17:11.912 "traddr": "10.0.0.2", 00:17:11.912 "trsvcid": "4420" 00:17:11.912 }, 00:17:11.912 "peer_address": { 00:17:11.912 "trtype": "TCP", 00:17:11.912 "adrfam": "IPv4", 00:17:11.912 "traddr": "10.0.0.1", 00:17:11.912 "trsvcid": "44848" 00:17:11.912 }, 00:17:11.912 "auth": { 00:17:11.912 "state": "completed", 00:17:11.912 "digest": "sha512", 00:17:11.912 "dhgroup": "ffdhe2048" 00:17:11.912 } 00:17:11.912 } 00:17:11.912 ]' 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.912 15:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.912 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.173 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:12.174 15:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.115 
15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.115 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.116 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:17:13.116 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.116 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.116 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.116 15:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.116 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.116 15:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.377 00:17:13.377 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.377 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.377 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.638 { 00:17:13.638 "cntlid": 111, 
00:17:13.638 "qid": 0, 00:17:13.638 "state": "enabled", 00:17:13.638 "thread": "nvmf_tgt_poll_group_000", 00:17:13.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:13.638 "listen_address": { 00:17:13.638 "trtype": "TCP", 00:17:13.638 "adrfam": "IPv4", 00:17:13.638 "traddr": "10.0.0.2", 00:17:13.638 "trsvcid": "4420" 00:17:13.638 }, 00:17:13.638 "peer_address": { 00:17:13.638 "trtype": "TCP", 00:17:13.638 "adrfam": "IPv4", 00:17:13.638 "traddr": "10.0.0.1", 00:17:13.638 "trsvcid": "44870" 00:17:13.638 }, 00:17:13.638 "auth": { 00:17:13.638 "state": "completed", 00:17:13.638 "digest": "sha512", 00:17:13.638 "dhgroup": "ffdhe2048" 00:17:13.638 } 00:17:13.638 } 00:17:13.638 ]' 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.638 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.898 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:13.898 15:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.837 15:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.837 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.838 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.838 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.838 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.838 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.098 00:17:15.098 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.098 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.098 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.358 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.358 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.358 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.358 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.358 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.358 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.358 { 00:17:15.358 "cntlid": 113, 00:17:15.358 "qid": 0, 00:17:15.358 "state": "enabled", 00:17:15.358 "thread": "nvmf_tgt_poll_group_000", 00:17:15.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:15.358 "listen_address": { 00:17:15.358 "trtype": "TCP", 00:17:15.358 "adrfam": "IPv4", 00:17:15.358 "traddr": "10.0.0.2", 00:17:15.358 "trsvcid": "4420" 00:17:15.358 }, 00:17:15.358 "peer_address": { 00:17:15.358 "trtype": "TCP", 00:17:15.358 "adrfam": "IPv4", 00:17:15.358 "traddr": "10.0.0.1", 00:17:15.358 "trsvcid": "44894" 00:17:15.358 }, 00:17:15.358 "auth": { 00:17:15.358 "state": 
"completed", 00:17:15.358 "digest": "sha512", 00:17:15.358 "dhgroup": "ffdhe3072" 00:17:15.358 } 00:17:15.358 } 00:17:15.358 ]' 00:17:15.358 15:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.358 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.358 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.358 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.358 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.358 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.358 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.358 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.618 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:15.619 15:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret 
DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:16.191 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:16.451 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.452 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.452 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.452 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.452 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.452 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.452 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.713 00:17:16.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.713 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.713 15:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.973 { 00:17:16.973 "cntlid": 115, 00:17:16.973 "qid": 0, 00:17:16.973 "state": "enabled", 00:17:16.973 "thread": "nvmf_tgt_poll_group_000", 00:17:16.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:16.973 "listen_address": { 00:17:16.973 "trtype": "TCP", 00:17:16.973 "adrfam": "IPv4", 00:17:16.973 "traddr": "10.0.0.2", 00:17:16.973 "trsvcid": "4420" 00:17:16.973 }, 00:17:16.973 "peer_address": { 00:17:16.973 "trtype": "TCP", 00:17:16.973 "adrfam": "IPv4", 00:17:16.973 "traddr": "10.0.0.1", 00:17:16.973 "trsvcid": "44914" 00:17:16.973 }, 00:17:16.973 "auth": { 00:17:16.973 "state": "completed", 00:17:16.973 "digest": "sha512", 00:17:16.973 "dhgroup": "ffdhe3072" 00:17:16.973 } 00:17:16.973 } 00:17:16.973 ]' 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.973 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.234 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.234 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.234 15:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.234 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:17.234 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.175 15:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.175 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.176 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.176 15:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.176 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.176 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.176 15:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.435 00:17:18.435 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.435 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.435 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.696 { 00:17:18.696 "cntlid": 117, 00:17:18.696 "qid": 0, 00:17:18.696 "state": "enabled", 00:17:18.696 "thread": "nvmf_tgt_poll_group_000", 00:17:18.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:18.696 "listen_address": { 00:17:18.696 "trtype": "TCP", 00:17:18.696 "adrfam": "IPv4", 00:17:18.696 "traddr": "10.0.0.2", 00:17:18.696 "trsvcid": "4420" 00:17:18.696 }, 00:17:18.696 "peer_address": { 00:17:18.696 "trtype": "TCP", 00:17:18.696 "adrfam": "IPv4", 00:17:18.696 "traddr": "10.0.0.1", 00:17:18.696 "trsvcid": "44938" 00:17:18.696 }, 00:17:18.696 "auth": { 00:17:18.696 "state": "completed", 00:17:18.696 "digest": "sha512", 00:17:18.696 "dhgroup": "ffdhe3072" 00:17:18.696 } 00:17:18.696 } 00:17:18.696 ]' 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.696 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:17:18.957 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:18.957 15:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.899 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.160 00:17:20.160 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.160 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.160 15:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.421 { 00:17:20.421 "cntlid": 119, 00:17:20.421 "qid": 0, 00:17:20.421 "state": "enabled", 00:17:20.421 "thread": "nvmf_tgt_poll_group_000", 00:17:20.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:20.421 "listen_address": { 00:17:20.421 "trtype": "TCP", 00:17:20.421 "adrfam": "IPv4", 00:17:20.421 "traddr": "10.0.0.2", 00:17:20.421 "trsvcid": "4420" 00:17:20.421 }, 00:17:20.421 "peer_address": { 00:17:20.421 "trtype": "TCP", 00:17:20.421 "adrfam": "IPv4", 00:17:20.421 "traddr": "10.0.0.1", 00:17:20.421 "trsvcid": "36404" 00:17:20.421 }, 00:17:20.421 "auth": { 00:17:20.421 
"state": "completed", 00:17:20.421 "digest": "sha512", 00:17:20.421 "dhgroup": "ffdhe3072" 00:17:20.421 } 00:17:20.421 } 00:17:20.421 ]' 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.421 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.753 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:20.753 15:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:21.347 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.608 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.608 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.869 00:17:21.869 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.870 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.870 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.131 
15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.131 { 00:17:22.131 "cntlid": 121, 00:17:22.131 "qid": 0, 00:17:22.131 "state": "enabled", 00:17:22.131 "thread": "nvmf_tgt_poll_group_000", 00:17:22.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:22.131 "listen_address": { 00:17:22.131 "trtype": "TCP", 00:17:22.131 "adrfam": "IPv4", 00:17:22.131 "traddr": "10.0.0.2", 00:17:22.131 "trsvcid": "4420" 00:17:22.131 }, 00:17:22.131 "peer_address": { 00:17:22.131 "trtype": "TCP", 00:17:22.131 "adrfam": "IPv4", 00:17:22.131 "traddr": "10.0.0.1", 00:17:22.131 "trsvcid": "36430" 00:17:22.131 }, 00:17:22.131 "auth": { 00:17:22.131 "state": "completed", 00:17:22.131 "digest": "sha512", 00:17:22.131 "dhgroup": "ffdhe4096" 00:17:22.131 } 00:17:22.131 } 00:17:22.131 ]' 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.131 15:14:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.131 15:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.392 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:22.392 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:23.340 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.340 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:23.340 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.340 15:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.340 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.340 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.340 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.340 15:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.340 15:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.340 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.341 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.600 00:17:23.600 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.600 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.600 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.861 { 00:17:23.861 "cntlid": 123, 00:17:23.861 "qid": 0, 00:17:23.861 "state": "enabled", 00:17:23.861 "thread": "nvmf_tgt_poll_group_000", 00:17:23.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:23.861 "listen_address": { 00:17:23.861 "trtype": "TCP", 00:17:23.861 "adrfam": "IPv4", 00:17:23.861 "traddr": "10.0.0.2", 00:17:23.861 "trsvcid": "4420" 00:17:23.861 }, 00:17:23.861 "peer_address": { 00:17:23.861 "trtype": "TCP", 00:17:23.861 "adrfam": "IPv4", 00:17:23.861 "traddr": "10.0.0.1", 00:17:23.861 "trsvcid": "36444" 00:17:23.861 }, 00:17:23.861 "auth": { 00:17:23.861 "state": "completed", 00:17:23.861 "digest": "sha512", 00:17:23.861 "dhgroup": "ffdhe4096" 00:17:23.861 } 00:17:23.861 } 00:17:23.861 ]' 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.861 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
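The cycle traced above repeats a fixed verification step: after `bdev_nvme_attach_controller` succeeds, `connect_authenticate` pulls the qpairs from `nvmf_subsystem_get_qpairs` and checks the negotiated `auth` fields with `jq`. A minimal, self-contained sketch of that check is below; the JSON is copied from the trace, and `parse_field` is a hypothetical jq-free stand-in for the script's `jq -r '.[0].auth.*'` calls, not a helper from `target/auth.sh`.

```shell
# Sample qpairs payload, as returned by nvmf_subsystem_get_qpairs in the trace
# above (trimmed to the auth object that connect_authenticate inspects).
qpairs='[{"auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe4096"}}]'

# Crude extractor for "key": "value" pairs (assumption: stands in for jq,
# which the real script uses; only works on simple flat string fields).
parse_field() {
  printf '%s' "$1" | sed -n "s/.*\"$2\": *\"\([^\"]*\)\".*/\1/p"
}

# The three checks connect_authenticate performs (auth.sh@75-77):
[[ $(parse_field "$qpairs" state)   == completed ]] && echo "auth state ok"
[[ $(parse_field "$qpairs" digest)  == sha512    ]] && echo "digest ok"
[[ $(parse_field "$qpairs" dhgroup) == ffdhe4096 ]] && echo "dhgroup ok"
```

If any field disagrees with the digest/dhgroup the host was configured with via `bdev_nvme_set_options`, the `[[ ... ]]` comparison fails and the test run aborts under `set -e` semantics.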
00:17:24.121 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:24.121 15:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.060 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.061 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.061 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.061 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.061 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.061 15:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.319 00:17:25.319 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.319 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.319 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.579 { 00:17:25.579 "cntlid": 125, 00:17:25.579 "qid": 0, 00:17:25.579 "state": "enabled", 00:17:25.579 "thread": "nvmf_tgt_poll_group_000", 00:17:25.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:25.579 "listen_address": { 00:17:25.579 "trtype": "TCP", 00:17:25.579 "adrfam": "IPv4", 00:17:25.579 "traddr": "10.0.0.2", 00:17:25.579 "trsvcid": "4420" 00:17:25.579 }, 00:17:25.579 "peer_address": { 00:17:25.579 "trtype": "TCP", 00:17:25.579 "adrfam": "IPv4", 
00:17:25.579 "traddr": "10.0.0.1", 00:17:25.579 "trsvcid": "36474" 00:17:25.579 }, 00:17:25.579 "auth": { 00:17:25.579 "state": "completed", 00:17:25.579 "digest": "sha512", 00:17:25.579 "dhgroup": "ffdhe4096" 00:17:25.579 } 00:17:25.579 } 00:17:25.579 ]' 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.579 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.838 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:25.839 15:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret 
DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.781 15:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.781 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.040 00:17:27.040 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.040 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.040 15:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.300 { 00:17:27.300 "cntlid": 127, 00:17:27.300 "qid": 0, 00:17:27.300 "state": "enabled", 00:17:27.300 "thread": "nvmf_tgt_poll_group_000", 00:17:27.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:27.300 "listen_address": { 00:17:27.300 "trtype": "TCP", 00:17:27.300 "adrfam": "IPv4", 00:17:27.300 "traddr": "10.0.0.2", 00:17:27.300 "trsvcid": "4420" 00:17:27.300 }, 00:17:27.300 "peer_address": { 00:17:27.300 "trtype": "TCP", 00:17:27.300 "adrfam": "IPv4", 00:17:27.300 "traddr": "10.0.0.1", 00:17:27.300 "trsvcid": "36488" 00:17:27.300 }, 00:17:27.300 "auth": { 00:17:27.300 "state": "completed", 00:17:27.300 "digest": "sha512", 00:17:27.300 "dhgroup": "ffdhe4096" 00:17:27.300 } 00:17:27.300 } 00:17:27.300 ]' 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.300 15:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.300 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.559 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.559 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.559 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.559 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:27.559 15:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
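At this point the trace finishes the ffdhe4096 group and moves on to ffdhe6144, which makes the overall shape of the run visible: an outer loop over DH groups and an inner loop over key indices, all with the sha512 digest. The sketch below reproduces that loop structure only; the group and key lists are assumptions inferred from this excerpt (the trace shows ffdhe3072, ffdhe4096, and ffdhe6144), not values read from `target/auth.sh`, and the `echo` stands in for the real per-iteration work.

```shell
# Loop structure of the test pass traced above (lists are assumptions
# inferred from the excerpt, not taken from target/auth.sh).
digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # groups visible in this excerpt
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # In the real script each iteration reconfigures the host side
    # (bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups) and then
    # runs connect_authenticate, which adds the host, attaches, verifies,
    # and tears the controller back down.
    echo "connect_authenticate $digest $dhgroup $keyid"
  done
done
```

That ordering explains why the same `nvmf_subsystem_add_host` / `bdev_nvme_attach_controller` / `nvme connect` / `nvme disconnect` sequence recurs in the log with only the `--dhchap-dhgroups` value and `keyN`/`ckeyN` pair changing between cycles.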
00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.499 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.068 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.068 15:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.068 { 00:17:29.068 "cntlid": 129, 00:17:29.068 "qid": 0, 00:17:29.068 "state": "enabled", 00:17:29.068 "thread": "nvmf_tgt_poll_group_000", 00:17:29.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:29.068 "listen_address": { 00:17:29.068 "trtype": "TCP", 00:17:29.068 "adrfam": "IPv4", 00:17:29.068 "traddr": "10.0.0.2", 00:17:29.068 "trsvcid": "4420" 00:17:29.068 }, 00:17:29.068 "peer_address": { 00:17:29.068 "trtype": "TCP", 00:17:29.068 "adrfam": "IPv4", 00:17:29.068 "traddr": "10.0.0.1", 00:17:29.068 "trsvcid": "36504" 00:17:29.068 }, 00:17:29.068 "auth": { 00:17:29.068 "state": "completed", 00:17:29.068 "digest": "sha512", 00:17:29.068 "dhgroup": "ffdhe6144" 00:17:29.068 } 00:17:29.068 } 00:17:29.068 ]' 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.068 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.327 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.327 15:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.327 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.327 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.327 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.327 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:29.327 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:30.265 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.265 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:30.265 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.265 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.265 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.265 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.265 15:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.265 15:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.526 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.786 00:17:30.786 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.786 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.786 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.047 { 00:17:31.047 "cntlid": 131, 00:17:31.047 "qid": 0, 00:17:31.047 "state": "enabled", 00:17:31.047 "thread": "nvmf_tgt_poll_group_000", 00:17:31.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:31.047 "listen_address": { 00:17:31.047 "trtype": "TCP", 00:17:31.047 "adrfam": "IPv4", 00:17:31.047 "traddr": "10.0.0.2", 00:17:31.047 
"trsvcid": "4420" 00:17:31.047 }, 00:17:31.047 "peer_address": { 00:17:31.047 "trtype": "TCP", 00:17:31.047 "adrfam": "IPv4", 00:17:31.047 "traddr": "10.0.0.1", 00:17:31.047 "trsvcid": "58204" 00:17:31.047 }, 00:17:31.047 "auth": { 00:17:31.047 "state": "completed", 00:17:31.047 "digest": "sha512", 00:17:31.047 "dhgroup": "ffdhe6144" 00:17:31.047 } 00:17:31.047 } 00:17:31.047 ]' 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.047 15:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.307 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:31.307 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 
80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.249 15:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.249 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.249 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.249 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.249 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.511 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.771 { 00:17:32.771 "cntlid": 133, 00:17:32.771 "qid": 0, 00:17:32.771 "state": "enabled", 00:17:32.771 "thread": "nvmf_tgt_poll_group_000", 00:17:32.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:32.771 "listen_address": { 00:17:32.771 "trtype": "TCP", 00:17:32.771 "adrfam": "IPv4", 00:17:32.771 "traddr": "10.0.0.2", 00:17:32.771 "trsvcid": "4420" 00:17:32.771 }, 00:17:32.771 "peer_address": { 00:17:32.771 "trtype": "TCP", 00:17:32.771 "adrfam": "IPv4", 00:17:32.771 "traddr": "10.0.0.1", 00:17:32.771 "trsvcid": "58220" 00:17:32.771 }, 00:17:32.771 "auth": { 00:17:32.771 "state": "completed", 00:17:32.771 "digest": "sha512", 00:17:32.771 "dhgroup": "ffdhe6144" 00:17:32.771 } 00:17:32.771 } 00:17:32.771 ]' 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.771 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.771 15:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.032 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.032 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.032 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.032 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.032 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.032 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:33.032 15:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:33.973 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.973 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:33.973 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.973 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.973 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.973 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.973 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.973 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.233 15:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.494 00:17:34.494 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.494 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.494 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.754 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.754 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.754 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.754 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.754 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.754 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.754 { 00:17:34.754 "cntlid": 135, 00:17:34.754 "qid": 0, 00:17:34.754 "state": "enabled", 00:17:34.754 "thread": "nvmf_tgt_poll_group_000", 00:17:34.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:34.754 "listen_address": { 00:17:34.754 "trtype": "TCP", 00:17:34.754 "adrfam": "IPv4", 00:17:34.754 "traddr": "10.0.0.2", 00:17:34.754 "trsvcid": "4420" 00:17:34.754 }, 00:17:34.754 "peer_address": { 00:17:34.754 "trtype": "TCP", 00:17:34.754 "adrfam": "IPv4", 00:17:34.754 "traddr": "10.0.0.1", 00:17:34.754 "trsvcid": "58238" 00:17:34.754 }, 00:17:34.754 "auth": { 00:17:34.754 "state": "completed", 00:17:34.754 "digest": "sha512", 00:17:34.754 "dhgroup": "ffdhe6144" 00:17:34.755 } 00:17:34.755 } 00:17:34.755 ]' 00:17:34.755 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.755 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.755 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.755 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.755 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.755 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.755 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.755 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.014 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:35.015 15:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:35.584 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.844 15:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.844 15:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.415 00:17:36.415 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.415 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.415 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.675 { 00:17:36.675 "cntlid": 137, 00:17:36.675 "qid": 0, 00:17:36.675 "state": "enabled", 00:17:36.675 "thread": "nvmf_tgt_poll_group_000", 00:17:36.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:36.675 "listen_address": { 00:17:36.675 "trtype": "TCP", 00:17:36.675 "adrfam": "IPv4", 00:17:36.675 "traddr": "10.0.0.2", 00:17:36.675 
"trsvcid": "4420" 00:17:36.675 }, 00:17:36.675 "peer_address": { 00:17:36.675 "trtype": "TCP", 00:17:36.675 "adrfam": "IPv4", 00:17:36.675 "traddr": "10.0.0.1", 00:17:36.675 "trsvcid": "58266" 00:17:36.675 }, 00:17:36.675 "auth": { 00:17:36.675 "state": "completed", 00:17:36.675 "digest": "sha512", 00:17:36.675 "dhgroup": "ffdhe8192" 00:17:36.675 } 00:17:36.675 } 00:17:36.675 ]' 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.675 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.935 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:36.935 15:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.877 15:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.877 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.878 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.878 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.878 15:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.447 00:17:38.447 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.447 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.447 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.708 { 00:17:38.708 "cntlid": 139, 00:17:38.708 "qid": 0, 00:17:38.708 "state": "enabled", 00:17:38.708 "thread": "nvmf_tgt_poll_group_000", 00:17:38.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:38.708 "listen_address": { 00:17:38.708 "trtype": "TCP", 00:17:38.708 "adrfam": "IPv4", 00:17:38.708 "traddr": "10.0.0.2", 00:17:38.708 "trsvcid": "4420" 00:17:38.708 }, 00:17:38.708 "peer_address": { 00:17:38.708 "trtype": "TCP", 00:17:38.708 "adrfam": "IPv4", 00:17:38.708 "traddr": "10.0.0.1", 00:17:38.708 "trsvcid": "58298" 00:17:38.708 }, 00:17:38.708 "auth": { 00:17:38.708 "state": "completed", 00:17:38.708 "digest": "sha512", 00:17:38.708 "dhgroup": "ffdhe8192" 00:17:38.708 } 00:17:38.708 } 00:17:38.708 ]' 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.708 15:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.708 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.967 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:38.968 15:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: --dhchap-ctrl-secret DHHC-1:02:MGQ4NjM2NTZlMmVmODM0YWRhNDRmNzVjYzY4MzhkNWJlNjkxNTUzYjdiZTAyYTEwqUc57Q==: 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.908 15:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.479 00:17:40.479 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.479 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.479 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.739 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.739 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.739 15:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.739 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.739 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.739 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.739 { 00:17:40.739 "cntlid": 141, 00:17:40.739 "qid": 0, 00:17:40.739 "state": "enabled", 00:17:40.740 "thread": "nvmf_tgt_poll_group_000", 00:17:40.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:40.740 "listen_address": { 00:17:40.740 "trtype": "TCP", 00:17:40.740 "adrfam": "IPv4", 00:17:40.740 "traddr": "10.0.0.2", 00:17:40.740 "trsvcid": "4420" 00:17:40.740 }, 00:17:40.740 "peer_address": { 00:17:40.740 "trtype": "TCP", 00:17:40.740 "adrfam": "IPv4", 00:17:40.740 "traddr": "10.0.0.1", 00:17:40.740 "trsvcid": "44548" 00:17:40.740 }, 00:17:40.740 "auth": { 00:17:40.740 "state": "completed", 00:17:40.740 "digest": "sha512", 00:17:40.740 "dhgroup": "ffdhe8192" 00:17:40.740 } 00:17:40.740 } 00:17:40.740 ]' 00:17:40.740 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.740 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.740 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.740 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.740 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.740 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.740 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.740 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.000 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:41.000 15:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:01:MDg2NDZkYmE2Nzg0NTgxNzllMDYyMDAwM2VhYjgxNGYYzadY: 00:17:41.570 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.831 15:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.399 00:17:42.399 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.399 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.399 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.658 { 00:17:42.658 "cntlid": 143, 00:17:42.658 "qid": 0, 00:17:42.658 "state": "enabled", 00:17:42.658 "thread": "nvmf_tgt_poll_group_000", 00:17:42.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:42.658 "listen_address": { 00:17:42.658 "trtype": "TCP", 00:17:42.658 "adrfam": 
"IPv4", 00:17:42.658 "traddr": "10.0.0.2", 00:17:42.658 "trsvcid": "4420" 00:17:42.658 }, 00:17:42.658 "peer_address": { 00:17:42.658 "trtype": "TCP", 00:17:42.658 "adrfam": "IPv4", 00:17:42.658 "traddr": "10.0.0.1", 00:17:42.658 "trsvcid": "44592" 00:17:42.658 }, 00:17:42.658 "auth": { 00:17:42.658 "state": "completed", 00:17:42.658 "digest": "sha512", 00:17:42.658 "dhgroup": "ffdhe8192" 00:17:42.658 } 00:17:42.658 } 00:17:42.658 ]' 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.658 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.917 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:42.917 15:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 
80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.857 15:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.857 15:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.426 00:17:44.426 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.426 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.426 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.685 { 00:17:44.685 "cntlid": 145, 00:17:44.685 "qid": 0, 00:17:44.685 "state": "enabled", 00:17:44.685 "thread": "nvmf_tgt_poll_group_000", 00:17:44.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:44.685 "listen_address": { 00:17:44.685 "trtype": "TCP", 00:17:44.685 "adrfam": "IPv4", 00:17:44.685 "traddr": "10.0.0.2", 00:17:44.685 "trsvcid": "4420" 00:17:44.685 }, 00:17:44.685 "peer_address": { 00:17:44.685 "trtype": "TCP", 00:17:44.685 "adrfam": "IPv4", 00:17:44.685 "traddr": "10.0.0.1", 00:17:44.685 "trsvcid": "44606" 00:17:44.685 }, 00:17:44.685 "auth": { 00:17:44.685 "state": 
"completed", 00:17:44.685 "digest": "sha512", 00:17:44.685 "dhgroup": "ffdhe8192" 00:17:44.685 } 00:17:44.685 } 00:17:44.685 ]' 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.685 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.945 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:44.945 15:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:00:ODBmMTcxMjA1ODc4OTcyZGI0YWJkZDk2ZDIzMDc0ZjVhMGNkNTA1YzgxNTE0ZWIzz6wBsg==: --dhchap-ctrl-secret 
DHHC-1:03:MTVkYWVjMzFlYjU4MTkwNzNlMTZhZWYwZGYzNTIzNDJiMWQyOGIxMmQ1ZjYzYzE1NDNiYjZkMTIyYmEyM2JkNTKThAE=: 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:45.885 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:46.146 request: 00:17:46.146 { 00:17:46.146 "name": "nvme0", 00:17:46.146 "trtype": "tcp", 00:17:46.146 "traddr": "10.0.0.2", 00:17:46.146 "adrfam": "ipv4", 00:17:46.146 "trsvcid": "4420", 00:17:46.146 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:46.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:46.146 "prchk_reftag": false, 00:17:46.146 "prchk_guard": false, 00:17:46.146 "hdgst": false, 00:17:46.146 "ddgst": false, 00:17:46.146 "dhchap_key": "key2", 00:17:46.146 "allow_unrecognized_csi": false, 00:17:46.146 "method": "bdev_nvme_attach_controller", 00:17:46.146 "req_id": 1 00:17:46.146 } 00:17:46.146 Got JSON-RPC error response 00:17:46.146 response: 00:17:46.146 { 00:17:46.146 "code": -5, 00:17:46.146 "message": 
"Input/output error" 00:17:46.146 } 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:46.146 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:46.147 15:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:46.147 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:46.147 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.147 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:46.147 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.147 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:46.147 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:46.147 15:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:46.718 request: 00:17:46.718 { 00:17:46.718 "name": "nvme0", 00:17:46.718 "trtype": "tcp", 00:17:46.718 "traddr": "10.0.0.2", 00:17:46.718 "adrfam": "ipv4", 00:17:46.718 "trsvcid": "4420", 00:17:46.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:46.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:46.718 "prchk_reftag": false, 00:17:46.718 "prchk_guard": false, 00:17:46.718 "hdgst": 
false, 00:17:46.718 "ddgst": false, 00:17:46.718 "dhchap_key": "key1", 00:17:46.718 "dhchap_ctrlr_key": "ckey2", 00:17:46.718 "allow_unrecognized_csi": false, 00:17:46.718 "method": "bdev_nvme_attach_controller", 00:17:46.718 "req_id": 1 00:17:46.718 } 00:17:46.718 Got JSON-RPC error response 00:17:46.718 response: 00:17:46.718 { 00:17:46.718 "code": -5, 00:17:46.718 "message": "Input/output error" 00:17:46.718 } 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.718 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.291 request: 00:17:47.291 { 00:17:47.291 "name": "nvme0", 00:17:47.291 "trtype": 
"tcp", 00:17:47.291 "traddr": "10.0.0.2", 00:17:47.291 "adrfam": "ipv4", 00:17:47.291 "trsvcid": "4420", 00:17:47.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:47.291 "prchk_reftag": false, 00:17:47.291 "prchk_guard": false, 00:17:47.291 "hdgst": false, 00:17:47.291 "ddgst": false, 00:17:47.291 "dhchap_key": "key1", 00:17:47.291 "dhchap_ctrlr_key": "ckey1", 00:17:47.291 "allow_unrecognized_csi": false, 00:17:47.291 "method": "bdev_nvme_attach_controller", 00:17:47.291 "req_id": 1 00:17:47.291 } 00:17:47.291 Got JSON-RPC error response 00:17:47.291 response: 00:17:47.291 { 00:17:47.291 "code": -5, 00:17:47.291 "message": "Input/output error" 00:17:47.291 } 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3934177 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 3934177 ']' 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3934177 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.291 15:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3934177 00:17:47.291 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.291 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.291 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3934177' 00:17:47.291 killing process with pid 3934177 00:17:47.291 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3934177 00:17:47.291 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3934177 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=3961392 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 3961392 00:17:47.552 15:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3961392 ']' 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.552 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.553 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.553 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.553 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.553 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.553 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:47.553 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3961392 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3961392 ']' 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.814 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 null0 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1rI 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ZMY ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZMY 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uEu 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.hEX ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hEX 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oUj 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.5n0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5n0 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gDG 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.075 15:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.017 nvme0n1 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.017 { 00:17:49.017 "cntlid": 1, 00:17:49.017 "qid": 0, 00:17:49.017 "state": "enabled", 00:17:49.017 "thread": "nvmf_tgt_poll_group_000", 00:17:49.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:49.017 "listen_address": { 00:17:49.017 "trtype": "TCP", 00:17:49.017 "adrfam": "IPv4", 00:17:49.017 "traddr": "10.0.0.2", 00:17:49.017 "trsvcid": "4420" 00:17:49.017 }, 00:17:49.017 "peer_address": { 00:17:49.017 "trtype": "TCP", 00:17:49.017 "adrfam": "IPv4", 00:17:49.017 "traddr": 
"10.0.0.1", 00:17:49.017 "trsvcid": "44644" 00:17:49.017 }, 00:17:49.017 "auth": { 00:17:49.017 "state": "completed", 00:17:49.017 "digest": "sha512", 00:17:49.017 "dhgroup": "ffdhe8192" 00:17:49.017 } 00:17:49.017 } 00:17:49.017 ]' 00:17:49.017 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.278 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.278 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.278 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.278 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.278 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.278 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.278 15:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.538 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:49.538 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:50.110 15:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.110 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:50.110 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.110 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.110 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.110 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:17:50.110 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.110 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.371 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.371 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:50.371 15:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:50.371 15:15:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.371 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.633 request: 00:17:50.633 { 00:17:50.633 "name": "nvme0", 00:17:50.633 "trtype": "tcp", 00:17:50.633 "traddr": "10.0.0.2", 00:17:50.633 "adrfam": "ipv4", 00:17:50.633 "trsvcid": "4420", 00:17:50.633 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:50.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:50.633 "prchk_reftag": false, 00:17:50.633 "prchk_guard": false, 00:17:50.633 "hdgst": false, 00:17:50.633 "ddgst": false, 00:17:50.633 "dhchap_key": "key3", 00:17:50.633 
"allow_unrecognized_csi": false, 00:17:50.633 "method": "bdev_nvme_attach_controller", 00:17:50.633 "req_id": 1 00:17:50.633 } 00:17:50.633 Got JSON-RPC error response 00:17:50.633 response: 00:17:50.633 { 00:17:50.633 "code": -5, 00:17:50.633 "message": "Input/output error" 00:17:50.633 } 00:17:50.633 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:50.633 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.633 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.633 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.633 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:50.633 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:50.633 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:50.633 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:50.894 15:15:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.894 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.894 request: 00:17:50.894 { 00:17:50.894 "name": "nvme0", 00:17:50.894 "trtype": "tcp", 00:17:50.894 "traddr": "10.0.0.2", 00:17:50.894 "adrfam": "ipv4", 00:17:50.894 "trsvcid": "4420", 00:17:50.894 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:50.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:50.894 "prchk_reftag": false, 00:17:50.894 "prchk_guard": false, 00:17:50.894 "hdgst": false, 00:17:50.894 "ddgst": false, 00:17:50.894 "dhchap_key": "key3", 00:17:50.894 "allow_unrecognized_csi": false, 00:17:50.894 "method": "bdev_nvme_attach_controller", 00:17:50.894 "req_id": 1 00:17:50.894 } 00:17:50.895 Got JSON-RPC error response 00:17:50.895 response: 00:17:50.895 { 00:17:50.895 "code": -5, 00:17:50.895 "message": "Input/output error" 00:17:50.895 } 00:17:50.895 
15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.895 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:51.156 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:51.156 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.156 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.156 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.156 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.157 15:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.417 request: 00:17:51.417 { 00:17:51.417 "name": "nvme0", 00:17:51.417 "trtype": "tcp", 00:17:51.417 "traddr": "10.0.0.2", 00:17:51.417 "adrfam": "ipv4", 00:17:51.417 "trsvcid": "4420", 00:17:51.417 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:51.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:51.417 "prchk_reftag": false, 00:17:51.417 "prchk_guard": false, 00:17:51.417 "hdgst": false, 00:17:51.417 "ddgst": false, 00:17:51.417 "dhchap_key": "key0", 00:17:51.417 "dhchap_ctrlr_key": "key1", 00:17:51.417 "allow_unrecognized_csi": false, 00:17:51.417 "method": "bdev_nvme_attach_controller", 00:17:51.417 "req_id": 1 00:17:51.417 } 00:17:51.417 Got JSON-RPC error response 00:17:51.417 response: 00:17:51.417 { 00:17:51.417 "code": -5, 00:17:51.417 "message": "Input/output error" 00:17:51.417 } 00:17:51.417 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:51.417 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:51.417 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:51.417 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:51.417 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:51.417 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:51.417 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:51.679 nvme0n1 00:17:51.679 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:51.679 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.679 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:51.939 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.939 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.939 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.201 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:17:52.201 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.201 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:52.201 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.201 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:52.201 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:52.201 15:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:53.143 nvme0n1 00:17:53.143 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.144 
15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:53.144 15:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.405 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.405 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:53.405 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 -l 0 --dhchap-secret DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: --dhchap-ctrl-secret DHHC-1:03:MjJjM2I2MDA5NDQ2YjlmZDQwYzVkZGIwNGI2ZTE3MzYxZGQ3OWIzN2Y3OGIyMGYyNDA3NmM4NjhjODg5NDEyNEYoQaU=: 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.976 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.237 15:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.809 request: 00:17:54.809 { 00:17:54.809 "name": "nvme0", 00:17:54.809 "trtype": "tcp", 00:17:54.809 "traddr": "10.0.0.2", 00:17:54.809 "adrfam": "ipv4", 00:17:54.809 "trsvcid": "4420", 00:17:54.809 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:54.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:17:54.809 "prchk_reftag": false, 00:17:54.809 "prchk_guard": false, 00:17:54.809 "hdgst": false, 00:17:54.809 "ddgst": false, 00:17:54.809 "dhchap_key": "key1", 00:17:54.809 "allow_unrecognized_csi": false, 00:17:54.809 "method": "bdev_nvme_attach_controller", 00:17:54.809 "req_id": 1 00:17:54.809 } 00:17:54.809 Got JSON-RPC error response 00:17:54.809 response: 00:17:54.809 { 00:17:54.809 "code": -5, 00:17:54.809 "message": "Input/output error" 00:17:54.809 } 00:17:54.809 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:54.809 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:54.809 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:54.809 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:54.809 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.809 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.809 15:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:55.750 nvme0n1 00:17:55.750 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:55.750 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:55.750 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.750 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.750 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.750 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.010 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:56.010 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.010 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.010 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.010 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:56.010 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:56.010 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:56.270 nvme0n1 00:17:56.270 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:56.270 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:56.270 15:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.270 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.270 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.270 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: '' 2s 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: ]] 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTE3OGRlZDNjMGEyNjA2ZWUwNzc3YWZlNDFmYTEzZmMVsxpC: 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:56.530 15:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:58.494 
15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: 2s 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:58.494 15:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: ]] 00:17:58.494 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWVhNTgwMWQzMTQ1M2E3NzVhYWUwYjYxNjdhMWUzMmQ4YzRkY2M3MWE3MDZkNTg3cchw0g==: 00:17:58.754 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:58.754 15:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:00.665 15:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.607 nvme0n1 00:18:01.607 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:01.607 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.607 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.607 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.607 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:01.607 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.247 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:02.247 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:02.247 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.247 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.248 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:02.248 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.248 15:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.248 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.248 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:02.248 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:02.561 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.149 request: 00:18:03.149 { 00:18:03.149 "name": "nvme0", 00:18:03.149 "dhchap_key": "key1", 00:18:03.149 "dhchap_ctrlr_key": "key3", 00:18:03.149 "method": "bdev_nvme_set_keys", 00:18:03.149 "req_id": 1 00:18:03.149 } 00:18:03.149 Got JSON-RPC error response 00:18:03.149 response: 00:18:03.149 { 00:18:03.149 "code": -13, 00:18:03.149 "message": "Permission denied" 00:18:03.149 } 00:18:03.149 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:03.149 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.149 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.149 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.149 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:03.149 15:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:03.149 15:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.409 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:03.409 15:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:04.350 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:04.350 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:04.350 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.612 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:04.612 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:04.612 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.612 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.612 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.612 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:04.612 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:04.612 15:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.556 nvme0n1 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.556 15:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.556 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.817 request: 00:18:05.817 { 00:18:05.817 "name": "nvme0", 00:18:05.817 "dhchap_key": "key2", 00:18:05.817 "dhchap_ctrlr_key": "key0", 00:18:05.817 "method": "bdev_nvme_set_keys", 00:18:05.817 "req_id": 1 00:18:05.817 } 00:18:05.817 Got JSON-RPC error response 00:18:05.817 response: 00:18:05.817 { 00:18:05.817 "code": -13, 00:18:05.817 "message": "Permission denied" 00:18:05.817 } 00:18:05.817 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:05.817 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.817 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.817 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.817 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:05.817 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:05.817 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.078 15:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:06.078 15:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:07.020 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:07.020 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:07.020 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3934369 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3934369 ']' 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3934369 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.281 15:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3934369 00:18:07.281 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:07.281 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:07.281 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 3934369' 00:18:07.281 killing process with pid 3934369 00:18:07.281 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3934369 00:18:07.281 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3934369 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.542 rmmod nvme_tcp 00:18:07.542 rmmod nvme_fabrics 00:18:07.542 rmmod nvme_keyring 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 3961392 ']' 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 3961392 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3961392 ']' 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3961392 
00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3961392 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3961392' 00:18:07.542 killing process with pid 3961392 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3961392 00:18:07.542 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3961392 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.803 15:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.803 15:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.719 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:09.719 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1rI /tmp/spdk.key-sha256.uEu /tmp/spdk.key-sha384.oUj /tmp/spdk.key-sha512.gDG /tmp/spdk.key-sha512.ZMY /tmp/spdk.key-sha384.hEX /tmp/spdk.key-sha256.5n0 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:09.980 00:18:09.980 real 2m44.612s 00:18:09.980 user 6m7.737s 00:18:09.980 sys 0m23.977s 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.980 ************************************ 00:18:09.980 END TEST nvmf_auth_target 00:18:09.980 ************************************ 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.980 ************************************ 00:18:09.980 START TEST nvmf_bdevio_no_huge 00:18:09.980 ************************************ 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:09.980 * Looking for test storage... 00:18:09.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:18:09.980 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # 
local 'op=<' 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:10.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.242 --rc genhtml_branch_coverage=1 00:18:10.242 --rc genhtml_function_coverage=1 00:18:10.242 --rc genhtml_legend=1 00:18:10.242 --rc geninfo_all_blocks=1 00:18:10.242 --rc geninfo_unexecuted_blocks=1 00:18:10.242 00:18:10.242 ' 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:10.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.242 --rc genhtml_branch_coverage=1 00:18:10.242 --rc genhtml_function_coverage=1 00:18:10.242 --rc genhtml_legend=1 00:18:10.242 --rc geninfo_all_blocks=1 00:18:10.242 --rc geninfo_unexecuted_blocks=1 00:18:10.242 00:18:10.242 ' 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:10.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.242 --rc genhtml_branch_coverage=1 00:18:10.242 --rc genhtml_function_coverage=1 00:18:10.242 --rc genhtml_legend=1 00:18:10.242 --rc geninfo_all_blocks=1 00:18:10.242 --rc geninfo_unexecuted_blocks=1 00:18:10.242 00:18:10.242 ' 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:10.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.242 --rc genhtml_branch_coverage=1 
00:18:10.242 --rc genhtml_function_coverage=1 00:18:10.242 --rc genhtml_legend=1 00:18:10.242 --rc geninfo_all_blocks=1 00:18:10.242 --rc geninfo_unexecuted_blocks=1 00:18:10.242 00:18:10.242 ' 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.242 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:10.243 15:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:10.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.243 15:15:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:18.386 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:18.386 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:18.387 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:18.387 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:18:18.387 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # is_hw=yes 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.387 15:15:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:18.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:18.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:18:18.387 00:18:18.387 --- 10.0.0.2 ping statistics --- 00:18:18.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.387 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:18.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:18:18.387 00:18:18.387 --- 10.0.0.1 ping statistics --- 00:18:18.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.387 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # return 0 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:18.387 15:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=3970224 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 3970224 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3970224 ']' 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.387 15:15:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.387 [2024-10-01 15:15:27.384318] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:18:18.387 [2024-10-01 15:15:27.384399] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:18.387 [2024-10-01 15:15:27.479855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:18.387 [2024-10-01 15:15:27.588642] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.387 [2024-10-01 15:15:27.588696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.387 [2024-10-01 15:15:27.588705] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.387 [2024-10-01 15:15:27.588712] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.387 [2024-10-01 15:15:27.588718] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:18.387 [2024-10-01 15:15:27.588879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:18:18.387 [2024-10-01 15:15:27.589043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:18:18.387 [2024-10-01 15:15:27.589245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:18:18.387 [2024-10-01 15:15:27.589246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:18.387 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.387 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:18.387 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:18.387 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:18.387 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.648 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.648 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.649 [2024-10-01 15:15:28.254606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:18.649 15:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.649 Malloc0 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.649 [2024-10-01 15:15:28.308622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.649 15:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:18:18.649 { 00:18:18.649 "params": { 00:18:18.649 "name": "Nvme$subsystem", 00:18:18.649 "trtype": "$TEST_TRANSPORT", 00:18:18.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.649 "adrfam": "ipv4", 00:18:18.649 "trsvcid": "$NVMF_PORT", 00:18:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.649 "hdgst": ${hdgst:-false}, 00:18:18.649 "ddgst": ${ddgst:-false} 00:18:18.649 }, 00:18:18.649 "method": "bdev_nvme_attach_controller" 00:18:18.649 } 00:18:18.649 EOF 00:18:18.649 )") 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:18:18.649 15:15:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:18:18.649 "params": { 00:18:18.649 "name": "Nvme1", 00:18:18.649 "trtype": "tcp", 00:18:18.649 "traddr": "10.0.0.2", 00:18:18.649 "adrfam": "ipv4", 00:18:18.649 "trsvcid": "4420", 00:18:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.649 "hdgst": false, 00:18:18.649 "ddgst": false 00:18:18.649 }, 00:18:18.649 "method": "bdev_nvme_attach_controller" 00:18:18.649 }' 00:18:18.649 [2024-10-01 15:15:28.370161] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:18:18.649 [2024-10-01 15:15:28.370260] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3970308 ] 00:18:18.649 [2024-10-01 15:15:28.447737] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:18.909 [2024-10-01 15:15:28.546238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.909 [2024-10-01 15:15:28.546413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.909 [2024-10-01 15:15:28.546417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.909 I/O targets: 00:18:18.909 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:18.909 00:18:18.909 00:18:18.909 CUnit - A unit testing framework for C - Version 2.1-3 00:18:18.909 http://cunit.sourceforge.net/ 00:18:18.909 00:18:18.909 00:18:18.909 Suite: bdevio tests on: Nvme1n1 00:18:18.909 Test: blockdev write read block ...passed 00:18:19.169 Test: blockdev write zeroes read block ...passed 00:18:19.169 Test: blockdev write zeroes read no split ...passed 00:18:19.169 Test: blockdev write zeroes 
read split ...passed 00:18:19.169 Test: blockdev write zeroes read split partial ...passed 00:18:19.169 Test: blockdev reset ...[2024-10-01 15:15:28.897329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:19.169 [2024-10-01 15:15:28.897387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7c8d0 (9): Bad file descriptor 00:18:19.169 [2024-10-01 15:15:28.967863] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:19.169 passed 00:18:19.169 Test: blockdev write read 8 blocks ...passed 00:18:19.169 Test: blockdev write read size > 128k ...passed 00:18:19.169 Test: blockdev write read invalid size ...passed 00:18:19.169 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:19.169 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:19.169 Test: blockdev write read max offset ...passed 00:18:19.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:19.429 Test: blockdev writev readv 8 blocks ...passed 00:18:19.430 Test: blockdev writev readv 30 x 1block ...passed 00:18:19.430 Test: blockdev writev readv block ...passed 00:18:19.430 Test: blockdev writev readv size > 128k ...passed 00:18:19.430 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:19.430 Test: blockdev comparev and writev ...[2024-10-01 15:15:29.151570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.430 [2024-10-01 15:15:29.151595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.151607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.430 [2024-10-01 15:15:29.151613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.152093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.430 [2024-10-01 15:15:29.152102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.152111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.430 [2024-10-01 15:15:29.152117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.152605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.430 [2024-10-01 15:15:29.152613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.152623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.430 [2024-10-01 15:15:29.152628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.153136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:19.430 [2024-10-01 15:15:29.153144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.153153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:18:19.430 [2024-10-01 15:15:29.153163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:19.430 passed 00:18:19.430 Test: blockdev nvme passthru rw ...passed 00:18:19.430 Test: blockdev nvme passthru vendor specific ...[2024-10-01 15:15:29.237781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.430 [2024-10-01 15:15:29.237791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.238114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.430 [2024-10-01 15:15:29.238122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.238460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.430 [2024-10-01 15:15:29.238467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:19.430 [2024-10-01 15:15:29.238802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.430 [2024-10-01 15:15:29.238809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:19.430 passed 00:18:19.430 Test: blockdev nvme admin passthru ...passed 00:18:19.691 Test: blockdev copy ...passed 00:18:19.691 00:18:19.691 Run Summary: Type Total Ran Passed Failed Inactive 00:18:19.691 suites 1 1 n/a 0 0 00:18:19.691 tests 23 23 23 0 0 00:18:19.691 asserts 152 152 152 0 n/a 00:18:19.691 00:18:19.691 Elapsed time = 1.177 seconds 00:18:19.951 15:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.951 rmmod nvme_tcp 00:18:19.951 rmmod nvme_fabrics 00:18:19.951 rmmod nvme_keyring 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 3970224 ']' 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@514 -- # killprocess 3970224 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3970224 ']' 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3970224 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.951 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3970224 00:18:19.952 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:19.952 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:19.952 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3970224' 00:18:19.952 killing process with pid 3970224 00:18:19.952 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3970224 00:18:19.952 15:15:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3970224 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.213 15:15:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:22.761 00:18:22.761 real 0m12.464s 00:18:22.761 user 0m13.606s 00:18:22.761 sys 0m6.758s 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.761 ************************************ 00:18:22.761 END TEST nvmf_bdevio_no_huge 00:18:22.761 ************************************ 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:22.761 ************************************ 00:18:22.761 START TEST nvmf_tls 
00:18:22.761 ************************************ 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:22.761 * Looking for test storage... 00:18:22.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.761 15:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:18:22.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.761 --rc genhtml_branch_coverage=1 00:18:22.761 --rc genhtml_function_coverage=1 00:18:22.761 --rc genhtml_legend=1 00:18:22.761 --rc geninfo_all_blocks=1 00:18:22.761 --rc geninfo_unexecuted_blocks=1 00:18:22.761 00:18:22.761 ' 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:22.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.761 --rc genhtml_branch_coverage=1 00:18:22.761 --rc genhtml_function_coverage=1 00:18:22.761 --rc genhtml_legend=1 00:18:22.761 --rc geninfo_all_blocks=1 00:18:22.761 --rc geninfo_unexecuted_blocks=1 00:18:22.761 00:18:22.761 ' 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:22.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.761 --rc genhtml_branch_coverage=1 00:18:22.761 --rc genhtml_function_coverage=1 00:18:22.761 --rc genhtml_legend=1 00:18:22.761 --rc geninfo_all_blocks=1 00:18:22.761 --rc geninfo_unexecuted_blocks=1 00:18:22.761 00:18:22.761 ' 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:22.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.761 --rc genhtml_branch_coverage=1 00:18:22.761 --rc genhtml_function_coverage=1 00:18:22.761 --rc genhtml_legend=1 00:18:22.761 --rc geninfo_all_blocks=1 00:18:22.761 --rc geninfo_unexecuted_blocks=1 00:18:22.761 00:18:22.761 ' 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.761 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:22.762 15:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:30.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 
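A few records above, nvmf/common.sh line 33 printed `[: : integer expression expected`. That is bash complaining that an empty string was handed to a numeric test (`'[' '' -eq 1 ']'`); the branch is simply skipped and the run continues. A minimal sketch of the failure mode and the usual defensive fix (`SOME_FLAG` is an illustrative name, not the variable the script actually tests):

```shell
#!/usr/bin/env bash
# An empty value in a numeric test reproduces the error seen above:
#   [ "" -eq 1 ]   -> "[: : integer expression expected", exit status 2
# Expanding with a default makes the test well-defined whether or not
# the variable is set. SOME_FLAG is an illustrative name only.
SOME_FLAG=""

if [ "$SOME_FLAG" -eq 1 ] 2> /dev/null; then
    echo "flag enabled"         # never reached: the test itself errors out
elif [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"        # prints "flag disabled"
fi
```

The `${var:-0}` expansion covers both the unset and the set-but-empty cases, so the `-eq` comparison always sees an integer.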
00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:30.900 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:30.901 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:30.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ up == up ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:30.901 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # (( 2 
== 0 )) 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # is_hw=yes 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip 
netns add cvl_0_0_ns_spdk 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:30.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:18:30.901 00:18:30.901 --- 10.0.0.2 ping statistics --- 00:18:30.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.901 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:30.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:30.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:18:30.901 00:18:30.901 --- 10.0.0.1 ping statistics --- 00:18:30.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.901 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # return 0 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3974964 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3974964 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3974964 ']' 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.901 15:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.901 [2024-10-01 15:15:39.992936] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:18:30.901 [2024-10-01 15:15:39.993011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.901 [2024-10-01 15:15:40.086786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.901 [2024-10-01 15:15:40.179595] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.901 [2024-10-01 15:15:40.179654] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
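The `waitforlisten 3974964` step above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") polls until the target's RPC socket appears. A runnable sketch of that polling shape, simplified to a plain path check (the real helper also verifies the PID is still alive and talks to the RPC socket):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll until a path (the RPC socket
# /var/tmp/spdk.sock in the real helper) appears, up to a retry limit.
# A plain file stands in for the socket so the sketch is runnable.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    echo "Waiting for $path..."
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &   # stand-in for the app starting up
wait_for_path "$tmp/spdk.sock" && echo "listening"
wait                                      # reap the background helper
rm -r "$tmp"
```

Bounding the retries matters here: if the target crashes on startup, the test fails after a finite wait instead of hanging the CI job.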
00:18:30.901 [2024-10-01 15:15:40.179663] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.901 [2024-10-01 15:15:40.179670] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.901 [2024-10-01 15:15:40.179676] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.901 [2024-10-01 15:15:40.179702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.161 15:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.161 15:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:31.161 15:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:31.161 15:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:31.161 15:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.161 15:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.161 15:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:31.161 15:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:31.421 true 00:18:31.421 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:31.421 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:31.421 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:31.421 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:31.421 
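Checks like `[[ 0 != \0 ]]` and, further on, `[[ 13 != \1\3 ]]` look odd only because `set -x` escapes the right-hand side of a `[[ ]]` comparison character by character (it is a glob pattern, not text containing backslashes). The script source is a plain set-then-verify step: configure a value over RPC, read it back, and fail if it differs. A sketch of that shape (the `version=13` assignment stands in for the `rpc.py sock_impl_get_options | jq -r .tls_version` pipeline in the real test):

```shell
#!/usr/bin/env bash
# Set-then-verify pattern from tls.sh: after configuring a value, read
# it back and fail the test if it differs. In xtrace output the right
# side of [[ != ]] appears escaped (e.g. \1\3) because it is a glob
# pattern, not because the script contains backslashes.
expected=13
version=13   # stand-in for: rpc.py sock_impl_get_options -i ssl | jq -r .tls_version

if [[ $version != "$expected" ]]; then
    echo "tls_version mismatch: got $version, want $expected" >&2
    exit 1
fi
echo "tls_version verified: $version"
```

Each TLS option in this test (version 13, version 7, ktls on, ktls off) goes through the same round trip, which is why the trace alternates `sock_impl_set_options` and `sock_impl_get_options` calls.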
15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:31.682 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:31.682 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:31.942 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:31.942 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:31.942 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:31.942 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:31.942 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:32.203 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:32.203 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:32.203 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:32.203 15:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:32.462 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:32.462 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:32.463 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:32.723 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:32.723 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:32.723 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:32.723 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:32.723 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:32.982 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:32.982 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:18:33.242 15:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.LIobnjpGSd 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.l7P9atyKUb 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.LIobnjpGSd 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
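The two `format_interchange_psk` invocations above turn a configured hex-string key into the NVMe/TCP TLS PSK interchange form `NVMeTLSkey-1:<hh>:<base64 payload>:`. A minimal Python sketch of that transformation, assuming (as the interchange format specifies) that the payload is the configured key bytes with a little-endian CRC-32 appended — the function name comes from the log, but the implementation here is inferred, not copied from SPDK's helper:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hmac_id: int) -> str:
    """Sketch of the PSK interchange format seen in the log:
    'NVMeTLSkey-1:<hh>:<base64(key bytes + little-endian CRC-32)>:'.
    hmac_id 01 corresponds to the SHA-256 hash function indicator."""
    data = key.encode("ascii")
    # Append the CRC-32 of the key bytes, little-endian (assumed digest scheme).
    crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
    payload = base64.b64encode(data + crc).decode("ascii")
    return "NVMeTLSkey-1:%02d:%s:" % (hmac_id, payload)

print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
```

If the CRC convention assumed here matches the target's, the output reproduces the `NVMeTLSkey-1:01:MDAx...JEiQ:` key echoed above before it is written to the `mktemp` file and `chmod 0600`-ed.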
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.l7P9atyKUb 00:18:33.242 15:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:33.502 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:33.762 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.LIobnjpGSd 00:18:33.762 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.LIobnjpGSd 00:18:33.762 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.762 [2024-10-01 15:15:43.550679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.762 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.022 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.022 [2024-10-01 15:15:43.871460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.022 [2024-10-01 15:15:43.871649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.282 15:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.282 malloc0 00:18:34.282 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.542 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.LIobnjpGSd 00:18:34.542 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:34.803 15:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.LIobnjpGSd 00:18:44.801 Initializing NVMe Controllers 00:18:44.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:44.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:44.801 Initialization complete. Launching workers. 
00:18:44.801 ======================================================== 00:18:44.801 Latency(us) 00:18:44.801 Device Information : IOPS MiB/s Average min max 00:18:44.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18694.84 73.03 3423.41 1229.16 4225.14 00:18:44.801 ======================================================== 00:18:44.801 Total : 18694.84 73.03 3423.41 1229.16 4225.14 00:18:44.801 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LIobnjpGSd 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LIobnjpGSd 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3977715 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3977715 /var/tmp/bdevperf.sock 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3977715 ']' 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.801 15:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.062 [2024-10-01 15:15:54.693392] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:18:45.062 [2024-10-01 15:15:54.693448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977715 ] 00:18:45.062 [2024-10-01 15:15:54.743473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.062 [2024-10-01 15:15:54.796477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.634 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.634 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:45.634 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LIobnjpGSd 00:18:45.895 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:46.156 [2024-10-01 15:15:55.800226] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.156 TLSTESTn1 00:18:46.156 15:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:46.156 Running I/O for 10 seconds... 00:18:56.447 5669.00 IOPS, 22.14 MiB/s 6085.00 IOPS, 23.77 MiB/s 6173.67 IOPS, 24.12 MiB/s 6165.00 IOPS, 24.08 MiB/s 6079.60 IOPS, 23.75 MiB/s 6129.83 IOPS, 23.94 MiB/s 6122.57 IOPS, 23.92 MiB/s 6018.38 IOPS, 23.51 MiB/s 6024.78 IOPS, 23.53 MiB/s 6027.00 IOPS, 23.54 MiB/s 00:18:56.447 Latency(us) 00:18:56.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.447 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:56.447 Verification LBA range: start 0x0 length 0x2000 00:18:56.447 TLSTESTn1 : 10.01 6031.55 23.56 0.00 0.00 21191.77 5925.55 35607.89 00:18:56.447 =================================================================================================================== 00:18:56.447 Total : 6031.55 23.56 0.00 0.00 21191.77 5925.55 35607.89 00:18:56.447 { 00:18:56.447 "results": [ 00:18:56.447 { 00:18:56.447 "job": "TLSTESTn1", 00:18:56.447 "core_mask": "0x4", 00:18:56.447 "workload": "verify", 00:18:56.447 "status": "finished", 00:18:56.447 "verify_range": { 00:18:56.447 "start": 0, 00:18:56.447 "length": 8192 00:18:56.447 }, 00:18:56.447 "queue_depth": 128, 00:18:56.447 "io_size": 4096, 00:18:56.447 "runtime": 10.013506, 00:18:56.447 "iops": 6031.553783460059, 00:18:56.447 "mibps": 23.560756966640856, 00:18:56.447 "io_failed": 0, 00:18:56.447 "io_timeout": 0, 00:18:56.447 "avg_latency_us": 21191.77299534745, 00:18:56.447 "min_latency_us": 5925.546666666667, 00:18:56.447 "max_latency_us": 35607.89333333333 00:18:56.447 } 00:18:56.447 ], 00:18:56.447 "core_count": 1 
00:18:56.447 } 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3977715 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3977715 ']' 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3977715 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3977715 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3977715' 00:18:56.447 killing process with pid 3977715 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3977715 00:18:56.447 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.447 00:18:56.447 Latency(us) 00:18:56.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.447 =================================================================================================================== 00:18:56.447 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3977715 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 
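The bdevperf summary above reports 6031.55 IOPS and 23.56 MiB/s for 4096-byte I/Os over a 10.013506 s runtime. Those columns are internally consistent and easy to cross-check; the `total_ios` value below is derived (IOPS × runtime), not a number printed in the log:

```python
# Cross-check the TLSTESTn1 result columns: MiB/s is just IOPS x 4 KiB blocks.
IO_SIZE = 4096            # -o 4096 from the bdevperf command line
RUNTIME = 10.013506       # "runtime" field from the JSON results above
IOPS = 6031.553783460059  # "iops" field from the JSON results above

mib_per_s = IOPS * IO_SIZE / (1024 * 1024)
total_ios = IOPS * RUNTIME  # inferred quantity, not reported directly

print(round(mib_per_s, 2))
```

The computed value matches the reported 23.56 MiB/s (and the full-precision `mibps` field) to well within rounding.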
nqn.2016-06.io.spdk:host1 /tmp/tmp.l7P9atyKUb 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l7P9atyKUb 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.l7P9atyKUb 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.l7P9atyKUb 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3980050 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3980050 /var/tmp/bdevperf.sock 00:18:56.447 15:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3980050 ']' 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.447 15:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.447 [2024-10-01 15:16:06.300207] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:18:56.447 [2024-10-01 15:16:06.300272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980050 ] 00:18:56.708 [2024-10-01 15:16:06.351571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.708 [2024-10-01 15:16:06.403392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.278 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.278 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:57.278 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.l7P9atyKUb 00:18:57.537 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:57.798 [2024-10-01 15:16:07.419182] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.798 [2024-10-01 15:16:07.430599] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:57.798 [2024-10-01 15:16:07.431301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95dc60 (107): Transport endpoint is not connected 00:18:57.798 [2024-10-01 15:16:07.432297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95dc60 (9): Bad file descriptor 00:18:57.798 [2024-10-01 
15:16:07.433299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.798 [2024-10-01 15:16:07.433310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:57.798 [2024-10-01 15:16:07.433315] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:57.798 [2024-10-01 15:16:07.433323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:57.798 request: 00:18:57.798 { 00:18:57.798 "name": "TLSTEST", 00:18:57.798 "trtype": "tcp", 00:18:57.798 "traddr": "10.0.0.2", 00:18:57.798 "adrfam": "ipv4", 00:18:57.798 "trsvcid": "4420", 00:18:57.798 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.798 "prchk_reftag": false, 00:18:57.798 "prchk_guard": false, 00:18:57.798 "hdgst": false, 00:18:57.798 "ddgst": false, 00:18:57.798 "psk": "key0", 00:18:57.798 "allow_unrecognized_csi": false, 00:18:57.798 "method": "bdev_nvme_attach_controller", 00:18:57.798 "req_id": 1 00:18:57.798 } 00:18:57.798 Got JSON-RPC error response 00:18:57.798 response: 00:18:57.798 { 00:18:57.798 "code": -5, 00:18:57.798 "message": "Input/output error" 00:18:57.798 } 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3980050 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3980050 ']' 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3980050 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3980050 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3980050' 00:18:57.798 killing process with pid 3980050 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3980050 00:18:57.798 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.798 00:18:57.798 Latency(us) 00:18:57.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.798 =================================================================================================================== 00:18:57.798 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3980050 00:18:57.798 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LIobnjpGSd 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LIobnjpGSd 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LIobnjpGSd 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LIobnjpGSd 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3980367 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3980367 /var/tmp/bdevperf.sock 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:57.799 15:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3980367 ']' 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.799 15:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.059 [2024-10-01 15:16:07.693172] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:18:58.059 [2024-10-01 15:16:07.693229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980367 ] 00:18:58.059 [2024-10-01 15:16:07.743288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.059 [2024-10-01 15:16:07.795641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.630 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.630 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:58.630 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LIobnjpGSd 00:18:58.891 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:59.151 [2024-10-01 15:16:08.807271] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.151 [2024-10-01 15:16:08.813866] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:59.151 [2024-10-01 15:16:08.813884] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:59.151 [2024-10-01 15:16:08.813904] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:59.151 [2024-10-01 15:16:08.814304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2024c60 (107): Transport endpoint is not connected 00:18:59.151 [2024-10-01 15:16:08.815300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2024c60 (9): Bad file descriptor 00:18:59.151 [2024-10-01 15:16:08.816302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.151 [2024-10-01 15:16:08.816310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:59.151 [2024-10-01 15:16:08.816315] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:59.151 [2024-10-01 15:16:08.816324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
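The attach attempt with the mismatched `hostnqn` fails PSK lookup on the target, and the log then prints the `bdev_nvme_attach_controller` request and its error -5 ("Input/output error") response. The request/response pair is ordinary JSON-RPC; a small sketch of how such a request envelope is assembled, with the `params` mirroring the request printed below — the envelope fields (`jsonrpc`, `id`) are standard JSON-RPC 2.0 and are assumed here rather than shown in the log:

```python
import json

def build_rpc_request(method: str, params: dict, req_id: int = 1) -> str:
    """Wrap an SPDK-style method call in a JSON-RPC 2.0 envelope."""
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "id": req_id, "params": params})

# Parameters copied from the failing request in the log.
req = build_rpc_request("bdev_nvme_attach_controller", {
    "name": "TLSTEST",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host2",
    "psk": "key0",
})
print(req)
```

On the wire this is what `rpc.py -s /var/tmp/bdevperf.sock` sends; the `NOT`/`return 1` bookkeeping that follows simply records that the expected failure did occur.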
00:18:59.151 request: 00:18:59.151 { 00:18:59.151 "name": "TLSTEST", 00:18:59.151 "trtype": "tcp", 00:18:59.151 "traddr": "10.0.0.2", 00:18:59.151 "adrfam": "ipv4", 00:18:59.151 "trsvcid": "4420", 00:18:59.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.151 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:59.151 "prchk_reftag": false, 00:18:59.151 "prchk_guard": false, 00:18:59.151 "hdgst": false, 00:18:59.151 "ddgst": false, 00:18:59.151 "psk": "key0", 00:18:59.151 "allow_unrecognized_csi": false, 00:18:59.151 "method": "bdev_nvme_attach_controller", 00:18:59.151 "req_id": 1 00:18:59.151 } 00:18:59.151 Got JSON-RPC error response 00:18:59.151 response: 00:18:59.151 { 00:18:59.151 "code": -5, 00:18:59.151 "message": "Input/output error" 00:18:59.151 } 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3980367 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3980367 ']' 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3980367 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3980367 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:59.151 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3980367' 00:18:59.151 killing process with pid 3980367 00:18:59.152 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3980367 00:18:59.152 Received 
shutdown signal, test time was about 10.000000 seconds 00:18:59.152 00:18:59.152 Latency(us) 00:18:59.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.152 =================================================================================================================== 00:18:59.152 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.152 15:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3980367 00:18:59.417 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LIobnjpGSd 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LIobnjpGSd 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.418 15:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LIobnjpGSd 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LIobnjpGSd 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3980549 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3980549 /var/tmp/bdevperf.sock 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3980549 ']' 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.418 [2024-10-01 15:16:09.078543] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:18:59.418 [2024-10-01 15:16:09.078597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980549 ] 00:18:59.418 [2024-10-01 15:16:09.128913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.418 [2024-10-01 15:16:09.181294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:59.418 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LIobnjpGSd 00:18:59.677 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.942 [2024-10-01 15:16:09.567386] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.942 [2024-10-01 15:16:09.575717] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:59.942 [2024-10-01 15:16:09.575735] posix.c: 
574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:59.942 [2024-10-01 15:16:09.575755] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:59.942 [2024-10-01 15:16:09.576671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c5c60 (107): Transport endpoint is not connected 00:18:59.942 [2024-10-01 15:16:09.577666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c5c60 (9): Bad file descriptor 00:18:59.942 [2024-10-01 15:16:09.578668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:59.942 [2024-10-01 15:16:09.578675] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:59.942 [2024-10-01 15:16:09.578681] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:59.942 [2024-10-01 15:16:09.578689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:59.942 request: 00:18:59.942 { 00:18:59.942 "name": "TLSTEST", 00:18:59.942 "trtype": "tcp", 00:18:59.942 "traddr": "10.0.0.2", 00:18:59.942 "adrfam": "ipv4", 00:18:59.942 "trsvcid": "4420", 00:18:59.942 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:59.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.942 "prchk_reftag": false, 00:18:59.942 "prchk_guard": false, 00:18:59.942 "hdgst": false, 00:18:59.942 "ddgst": false, 00:18:59.942 "psk": "key0", 00:18:59.942 "allow_unrecognized_csi": false, 00:18:59.942 "method": "bdev_nvme_attach_controller", 00:18:59.942 "req_id": 1 00:18:59.942 } 00:18:59.942 Got JSON-RPC error response 00:18:59.942 response: 00:18:59.942 { 00:18:59.942 "code": -5, 00:18:59.942 "message": "Input/output error" 00:18:59.942 } 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3980549 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3980549 ']' 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3980549 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3980549 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3980549' 00:18:59.942 killing process with pid 3980549 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3980549 00:18:59.942 Received 
shutdown signal, test time was about 10.000000 seconds 00:18:59.942 00:18:59.942 Latency(us) 00:18:59.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.942 =================================================================================================================== 00:18:59.942 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3980549 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3980758 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3980758 /var/tmp/bdevperf.sock 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3980758 ']' 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.942 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.943 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:59.943 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.943 15:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.203 [2024-10-01 15:16:09.827595] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:00.203 [2024-10-01 15:16:09.827650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3980758 ] 00:19:00.203 [2024-10-01 15:16:09.877786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.203 [2024-10-01 15:16:09.930317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.773 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.773 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:00.773 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:01.049 [2024-10-01 15:16:10.761748] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:01.049 [2024-10-01 15:16:10.761772] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:01.049 request: 00:19:01.049 { 00:19:01.049 "name": "key0", 00:19:01.049 "path": "", 00:19:01.049 "method": "keyring_file_add_key", 00:19:01.049 "req_id": 1 00:19:01.049 } 00:19:01.049 Got JSON-RPC error response 00:19:01.049 response: 00:19:01.049 { 00:19:01.049 "code": -1, 00:19:01.049 "message": "Operation not permitted" 00:19:01.049 } 00:19:01.049 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:01.363 [2024-10-01 15:16:10.930251] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.363 [2024-10-01 15:16:10.930273] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:01.363 request: 00:19:01.363 { 00:19:01.363 "name": "TLSTEST", 00:19:01.363 "trtype": "tcp", 00:19:01.363 "traddr": "10.0.0.2", 00:19:01.363 "adrfam": "ipv4", 00:19:01.363 "trsvcid": "4420", 00:19:01.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.363 "prchk_reftag": false, 00:19:01.363 "prchk_guard": false, 00:19:01.363 "hdgst": false, 00:19:01.363 "ddgst": false, 00:19:01.363 "psk": "key0", 00:19:01.363 "allow_unrecognized_csi": false, 00:19:01.363 "method": "bdev_nvme_attach_controller", 00:19:01.363 "req_id": 1 00:19:01.363 } 00:19:01.363 Got JSON-RPC error response 00:19:01.363 response: 00:19:01.363 { 00:19:01.363 "code": -126, 00:19:01.363 "message": "Required key not available" 00:19:01.363 } 00:19:01.363 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3980758 00:19:01.363 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3980758 ']' 00:19:01.363 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3980758 00:19:01.363 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:01.363 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.363 15:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3980758 00:19:01.363 15:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3980758' 00:19:01.363 killing process with pid 3980758 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3980758 00:19:01.363 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.363 00:19:01.363 Latency(us) 00:19:01.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.363 =================================================================================================================== 00:19:01.363 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3980758 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3974964 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3974964 ']' 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3974964 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:01.363 15:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3974964 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3974964' 00:19:01.363 killing process with pid 3974964 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3974964 00:19:01.363 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3974964 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 
00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.756kRTrJb0 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.756kRTrJb0 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3981107 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3981107 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3981107 ']' 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.658 15:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.658 [2024-10-01 15:16:11.433603] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:01.658 [2024-10-01 15:16:11.433703] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.922 [2024-10-01 15:16:11.521321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.922 [2024-10-01 15:16:11.578628] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.922 [2024-10-01 15:16:11.578661] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.922 [2024-10-01 15:16:11.578666] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.922 [2024-10-01 15:16:11.578671] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.922 [2024-10-01 15:16:11.578675] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.922 [2024-10-01 15:16:11.578690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.756kRTrJb0 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.756kRTrJb0 00:19:02.491 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:02.750 [2024-10-01 15:16:12.411064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.750 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:02.750 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.009 [2024-10-01 15:16:12.715808] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.009 [2024-10-01 15:16:12.715979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:03.009 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.268 malloc0 00:19:03.268 15:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.268 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:03.529 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.756kRTrJb0 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.756kRTrJb0 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3981477 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3981477 /var/tmp/bdevperf.sock 
00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3981477 ']' 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.789 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.789 [2024-10-01 15:16:13.470870] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:19:03.789 [2024-10-01 15:16:13.470925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3981477 ] 00:19:03.789 [2024-10-01 15:16:13.520896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.789 [2024-10-01 15:16:13.573587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.049 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.049 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:04.049 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:04.049 15:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.309 [2024-10-01 15:16:13.979721] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.309 TLSTESTn1 00:19:04.309 15:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:04.309 Running I/O for 10 seconds... 
00:19:14.624 5810.00 IOPS, 22.70 MiB/s 6090.00 IOPS, 23.79 MiB/s 5946.00 IOPS, 23.23 MiB/s 6063.75 IOPS, 23.69 MiB/s 5782.20 IOPS, 22.59 MiB/s 5747.33 IOPS, 22.45 MiB/s 5773.29 IOPS, 22.55 MiB/s 5871.62 IOPS, 22.94 MiB/s 5795.67 IOPS, 22.64 MiB/s 5680.30 IOPS, 22.19 MiB/s 00:19:14.624 Latency(us) 00:19:14.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.624 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.624 Verification LBA range: start 0x0 length 0x2000 00:19:14.624 TLSTESTn1 : 10.02 5679.64 22.19 0.00 0.00 22498.89 4587.52 25886.72 00:19:14.624 =================================================================================================================== 00:19:14.624 Total : 5679.64 22.19 0.00 0.00 22498.89 4587.52 25886.72 00:19:14.624 { 00:19:14.624 "results": [ 00:19:14.624 { 00:19:14.624 "job": "TLSTESTn1", 00:19:14.624 "core_mask": "0x4", 00:19:14.624 "workload": "verify", 00:19:14.624 "status": "finished", 00:19:14.624 "verify_range": { 00:19:14.624 "start": 0, 00:19:14.624 "length": 8192 00:19:14.624 }, 00:19:14.624 "queue_depth": 128, 00:19:14.624 "io_size": 4096, 00:19:14.624 "runtime": 10.023353, 00:19:14.624 "iops": 5679.636345242954, 00:19:14.624 "mibps": 22.18607947360529, 00:19:14.624 "io_failed": 0, 00:19:14.624 "io_timeout": 0, 00:19:14.624 "avg_latency_us": 22498.893162594344, 00:19:14.624 "min_latency_us": 4587.52, 00:19:14.624 "max_latency_us": 25886.72 00:19:14.624 } 00:19:14.624 ], 00:19:14.624 "core_count": 1 00:19:14.624 } 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3981477 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3981477 ']' 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill 
-0 3981477 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3981477 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3981477' 00:19:14.624 killing process with pid 3981477 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3981477 00:19:14.624 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.624 00:19:14.624 Latency(us) 00:19:14.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.624 =================================================================================================================== 00:19:14.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3981477 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.756kRTrJb0 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.756kRTrJb0 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.756kRTrJb0 00:19:14.624 15:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.756kRTrJb0 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.624 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.756kRTrJb0 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3983515 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3983515 /var/tmp/bdevperf.sock 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3983515 ']' 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.625 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.625 [2024-10-01 15:16:24.473722] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:14.625 [2024-10-01 15:16:24.473782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3983515 ] 00:19:14.886 [2024-10-01 15:16:24.525497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.886 [2024-10-01 15:16:24.578350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.886 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.886 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:14.886 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:15.147 [2024-10-01 15:16:24.812244] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.756kRTrJb0': 0100666 00:19:15.147 [2024-10-01 15:16:24.812270] keyring.c: 126:spdk_keyring_add_key: 
*ERROR*: Failed to add key 'key0' to the keyring 00:19:15.147 request: 00:19:15.147 { 00:19:15.147 "name": "key0", 00:19:15.147 "path": "/tmp/tmp.756kRTrJb0", 00:19:15.147 "method": "keyring_file_add_key", 00:19:15.147 "req_id": 1 00:19:15.147 } 00:19:15.147 Got JSON-RPC error response 00:19:15.147 response: 00:19:15.147 { 00:19:15.147 "code": -1, 00:19:15.147 "message": "Operation not permitted" 00:19:15.147 } 00:19:15.147 15:16:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.147 [2024-10-01 15:16:24.996775] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.147 [2024-10-01 15:16:24.996796] bdev_nvme.c:6389:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:15.147 request: 00:19:15.147 { 00:19:15.147 "name": "TLSTEST", 00:19:15.147 "trtype": "tcp", 00:19:15.147 "traddr": "10.0.0.2", 00:19:15.147 "adrfam": "ipv4", 00:19:15.147 "trsvcid": "4420", 00:19:15.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.147 "prchk_reftag": false, 00:19:15.147 "prchk_guard": false, 00:19:15.147 "hdgst": false, 00:19:15.147 "ddgst": false, 00:19:15.147 "psk": "key0", 00:19:15.147 "allow_unrecognized_csi": false, 00:19:15.147 "method": "bdev_nvme_attach_controller", 00:19:15.147 "req_id": 1 00:19:15.147 } 00:19:15.147 Got JSON-RPC error response 00:19:15.147 response: 00:19:15.147 { 00:19:15.147 "code": -126, 00:19:15.147 "message": "Required key not available" 00:19:15.147 } 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3983515 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3983515 ']' 00:19:15.408 
15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3983515 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3983515 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3983515' 00:19:15.408 killing process with pid 3983515 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3983515 00:19:15.408 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.408 00:19:15.408 Latency(us) 00:19:15.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.408 =================================================================================================================== 00:19:15.408 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3983515 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3981107 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3981107 ']' 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3981107 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.408 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3981107 00:19:15.668 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:15.668 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:15.668 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3981107' 00:19:15.668 killing process with pid 3981107 00:19:15.668 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3981107 00:19:15.668 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3981107 00:19:15.668 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3983838 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3983838 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3983838 ']' 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.669 15:16:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.669 [2024-10-01 15:16:25.451470] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:15.669 [2024-10-01 15:16:25.451525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.929 [2024-10-01 15:16:25.535261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.929 [2024-10-01 15:16:25.587683] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.929 [2024-10-01 15:16:25.587717] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:15.929 [2024-10-01 15:16:25.587723] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.929 [2024-10-01 15:16:25.587728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.929 [2024-10-01 15:16:25.587733] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.929 [2024-10-01 15:16:25.587753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.756kRTrJb0 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.756kRTrJb0 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:16.500 15:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.756kRTrJb0 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.756kRTrJb0 00:19:16.500 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:16.762 [2024-10-01 15:16:26.420157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.762 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:16.762 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:17.023 [2024-10-01 15:16:26.756982] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.023 [2024-10-01 15:16:26.757168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.023 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:17.284 malloc0 00:19:17.284 15:16:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:17.284 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:17.544 [2024-10-01 15:16:27.241782] 
keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.756kRTrJb0': 0100666 00:19:17.544 [2024-10-01 15:16:27.241803] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:17.544 request: 00:19:17.544 { 00:19:17.544 "name": "key0", 00:19:17.544 "path": "/tmp/tmp.756kRTrJb0", 00:19:17.544 "method": "keyring_file_add_key", 00:19:17.544 "req_id": 1 00:19:17.544 } 00:19:17.544 Got JSON-RPC error response 00:19:17.544 response: 00:19:17.544 { 00:19:17.544 "code": -1, 00:19:17.544 "message": "Operation not permitted" 00:19:17.544 } 00:19:17.544 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:17.544 [2024-10-01 15:16:27.398177] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:17.544 [2024-10-01 15:16:27.398201] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:17.544 request: 00:19:17.544 { 00:19:17.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.544 "host": "nqn.2016-06.io.spdk:host1", 00:19:17.544 "psk": "key0", 00:19:17.544 "method": "nvmf_subsystem_add_host", 00:19:17.544 "req_id": 1 00:19:17.544 } 00:19:17.544 Got JSON-RPC error response 00:19:17.544 response: 00:19:17.544 { 00:19:17.544 "code": -32603, 00:19:17.544 "message": "Internal error" 00:19:17.544 } 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:17.806 15:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3983838 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3983838 ']' 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3983838 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3983838 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3983838' 00:19:17.806 killing process with pid 3983838 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3983838 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3983838 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.756kRTrJb0 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3984211 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3984211 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3984211 ']' 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.806 15:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.806 [2024-10-01 15:16:27.643877] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:17.806 [2024-10-01 15:16:27.643972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.068 [2024-10-01 15:16:27.739907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.068 [2024-10-01 15:16:27.793373] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.068 [2024-10-01 15:16:27.793404] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:18.068 [2024-10-01 15:16:27.793410] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.068 [2024-10-01 15:16:27.793415] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.068 [2024-10-01 15:16:27.793419] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.068 [2024-10-01 15:16:27.793434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.756kRTrJb0 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.756kRTrJb0 00:19:18.640 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.901 [2024-10-01 15:16:28.641918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.901 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.162 15:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.162 [2024-10-01 15:16:28.962700] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.162 [2024-10-01 15:16:28.962870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.162 15:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.422 malloc0 00:19:19.422 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:19.682 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:19.682 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3984601 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3984601 /var/tmp/bdevperf.sock 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
-z 3984601 ']' 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.943 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.943 [2024-10-01 15:16:29.646173] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:19.943 [2024-10-01 15:16:29.646227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3984601 ] 00:19:19.943 [2024-10-01 15:16:29.697142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.943 [2024-10-01 15:16:29.749498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.203 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.203 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:20.203 15:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:20.203 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.464 [2024-10-01 15:16:30.188064] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.464 TLSTESTn1 00:19:20.464 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:20.724 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:20.724 "subsystems": [ 00:19:20.724 { 00:19:20.724 "subsystem": "keyring", 00:19:20.724 "config": [ 00:19:20.724 { 00:19:20.724 "method": "keyring_file_add_key", 00:19:20.724 "params": { 00:19:20.724 "name": "key0", 00:19:20.724 "path": "/tmp/tmp.756kRTrJb0" 00:19:20.724 } 00:19:20.724 } 00:19:20.724 ] 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "subsystem": "iobuf", 00:19:20.724 "config": [ 00:19:20.724 { 00:19:20.724 "method": "iobuf_set_options", 00:19:20.724 "params": { 00:19:20.724 "small_pool_count": 8192, 00:19:20.724 "large_pool_count": 1024, 00:19:20.724 "small_bufsize": 8192, 00:19:20.724 "large_bufsize": 135168 00:19:20.724 } 00:19:20.724 } 00:19:20.724 ] 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "subsystem": "sock", 00:19:20.724 "config": [ 00:19:20.724 { 00:19:20.724 "method": "sock_set_default_impl", 00:19:20.724 "params": { 00:19:20.724 "impl_name": "posix" 00:19:20.724 } 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "method": "sock_impl_set_options", 00:19:20.724 "params": { 00:19:20.724 "impl_name": "ssl", 00:19:20.724 "recv_buf_size": 4096, 00:19:20.724 "send_buf_size": 4096, 00:19:20.724 "enable_recv_pipe": true, 00:19:20.724 "enable_quickack": false, 00:19:20.724 "enable_placement_id": 0, 00:19:20.724 "enable_zerocopy_send_server": true, 00:19:20.724 "enable_zerocopy_send_client": false, 00:19:20.724 "zerocopy_threshold": 0, 00:19:20.724 "tls_version": 0, 
00:19:20.724 "enable_ktls": false 00:19:20.724 } 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "method": "sock_impl_set_options", 00:19:20.724 "params": { 00:19:20.724 "impl_name": "posix", 00:19:20.724 "recv_buf_size": 2097152, 00:19:20.724 "send_buf_size": 2097152, 00:19:20.724 "enable_recv_pipe": true, 00:19:20.724 "enable_quickack": false, 00:19:20.724 "enable_placement_id": 0, 00:19:20.724 "enable_zerocopy_send_server": true, 00:19:20.724 "enable_zerocopy_send_client": false, 00:19:20.724 "zerocopy_threshold": 0, 00:19:20.724 "tls_version": 0, 00:19:20.724 "enable_ktls": false 00:19:20.724 } 00:19:20.724 } 00:19:20.724 ] 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "subsystem": "vmd", 00:19:20.724 "config": [] 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "subsystem": "accel", 00:19:20.724 "config": [ 00:19:20.724 { 00:19:20.724 "method": "accel_set_options", 00:19:20.724 "params": { 00:19:20.724 "small_cache_size": 128, 00:19:20.724 "large_cache_size": 16, 00:19:20.724 "task_count": 2048, 00:19:20.724 "sequence_count": 2048, 00:19:20.724 "buf_count": 2048 00:19:20.724 } 00:19:20.724 } 00:19:20.724 ] 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "subsystem": "bdev", 00:19:20.724 "config": [ 00:19:20.724 { 00:19:20.724 "method": "bdev_set_options", 00:19:20.724 "params": { 00:19:20.724 "bdev_io_pool_size": 65535, 00:19:20.724 "bdev_io_cache_size": 256, 00:19:20.724 "bdev_auto_examine": true, 00:19:20.724 "iobuf_small_cache_size": 128, 00:19:20.724 "iobuf_large_cache_size": 16 00:19:20.724 } 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "method": "bdev_raid_set_options", 00:19:20.724 "params": { 00:19:20.724 "process_window_size_kb": 1024, 00:19:20.724 "process_max_bandwidth_mb_sec": 0 00:19:20.724 } 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "method": "bdev_iscsi_set_options", 00:19:20.724 "params": { 00:19:20.724 "timeout_sec": 30 00:19:20.724 } 00:19:20.724 }, 00:19:20.724 { 00:19:20.724 "method": "bdev_nvme_set_options", 00:19:20.724 "params": { 00:19:20.724 
"action_on_timeout": "none", 00:19:20.724 "timeout_us": 0, 00:19:20.724 "timeout_admin_us": 0, 00:19:20.724 "keep_alive_timeout_ms": 10000, 00:19:20.724 "arbitration_burst": 0, 00:19:20.724 "low_priority_weight": 0, 00:19:20.724 "medium_priority_weight": 0, 00:19:20.724 "high_priority_weight": 0, 00:19:20.724 "nvme_adminq_poll_period_us": 10000, 00:19:20.724 "nvme_ioq_poll_period_us": 0, 00:19:20.724 "io_queue_requests": 0, 00:19:20.724 "delay_cmd_submit": true, 00:19:20.724 "transport_retry_count": 4, 00:19:20.724 "bdev_retry_count": 3, 00:19:20.724 "transport_ack_timeout": 0, 00:19:20.724 "ctrlr_loss_timeout_sec": 0, 00:19:20.724 "reconnect_delay_sec": 0, 00:19:20.724 "fast_io_fail_timeout_sec": 0, 00:19:20.724 "disable_auto_failback": false, 00:19:20.724 "generate_uuids": false, 00:19:20.724 "transport_tos": 0, 00:19:20.724 "nvme_error_stat": false, 00:19:20.724 "rdma_srq_size": 0, 00:19:20.724 "io_path_stat": false, 00:19:20.724 "allow_accel_sequence": false, 00:19:20.724 "rdma_max_cq_size": 0, 00:19:20.724 "rdma_cm_event_timeout_ms": 0, 00:19:20.724 "dhchap_digests": [ 00:19:20.724 "sha256", 00:19:20.724 "sha384", 00:19:20.724 "sha512" 00:19:20.725 ], 00:19:20.725 "dhchap_dhgroups": [ 00:19:20.725 "null", 00:19:20.725 "ffdhe2048", 00:19:20.725 "ffdhe3072", 00:19:20.725 "ffdhe4096", 00:19:20.725 "ffdhe6144", 00:19:20.725 "ffdhe8192" 00:19:20.725 ] 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "bdev_nvme_set_hotplug", 00:19:20.725 "params": { 00:19:20.725 "period_us": 100000, 00:19:20.725 "enable": false 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "bdev_malloc_create", 00:19:20.725 "params": { 00:19:20.725 "name": "malloc0", 00:19:20.725 "num_blocks": 8192, 00:19:20.725 "block_size": 4096, 00:19:20.725 "physical_block_size": 4096, 00:19:20.725 "uuid": "e508cec9-7c4d-4546-9b83-99dc03477151", 00:19:20.725 "optimal_io_boundary": 0, 00:19:20.725 "md_size": 0, 00:19:20.725 "dif_type": 0, 00:19:20.725 
"dif_is_head_of_md": false, 00:19:20.725 "dif_pi_format": 0 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "bdev_wait_for_examine" 00:19:20.725 } 00:19:20.725 ] 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "subsystem": "nbd", 00:19:20.725 "config": [] 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "subsystem": "scheduler", 00:19:20.725 "config": [ 00:19:20.725 { 00:19:20.725 "method": "framework_set_scheduler", 00:19:20.725 "params": { 00:19:20.725 "name": "static" 00:19:20.725 } 00:19:20.725 } 00:19:20.725 ] 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "subsystem": "nvmf", 00:19:20.725 "config": [ 00:19:20.725 { 00:19:20.725 "method": "nvmf_set_config", 00:19:20.725 "params": { 00:19:20.725 "discovery_filter": "match_any", 00:19:20.725 "admin_cmd_passthru": { 00:19:20.725 "identify_ctrlr": false 00:19:20.725 }, 00:19:20.725 "dhchap_digests": [ 00:19:20.725 "sha256", 00:19:20.725 "sha384", 00:19:20.725 "sha512" 00:19:20.725 ], 00:19:20.725 "dhchap_dhgroups": [ 00:19:20.725 "null", 00:19:20.725 "ffdhe2048", 00:19:20.725 "ffdhe3072", 00:19:20.725 "ffdhe4096", 00:19:20.725 "ffdhe6144", 00:19:20.725 "ffdhe8192" 00:19:20.725 ] 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "nvmf_set_max_subsystems", 00:19:20.725 "params": { 00:19:20.725 "max_subsystems": 1024 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "nvmf_set_crdt", 00:19:20.725 "params": { 00:19:20.725 "crdt1": 0, 00:19:20.725 "crdt2": 0, 00:19:20.725 "crdt3": 0 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "nvmf_create_transport", 00:19:20.725 "params": { 00:19:20.725 "trtype": "TCP", 00:19:20.725 "max_queue_depth": 128, 00:19:20.725 "max_io_qpairs_per_ctrlr": 127, 00:19:20.725 "in_capsule_data_size": 4096, 00:19:20.725 "max_io_size": 131072, 00:19:20.725 "io_unit_size": 131072, 00:19:20.725 "max_aq_depth": 128, 00:19:20.725 "num_shared_buffers": 511, 00:19:20.725 "buf_cache_size": 4294967295, 00:19:20.725 "dif_insert_or_strip": 
false, 00:19:20.725 "zcopy": false, 00:19:20.725 "c2h_success": false, 00:19:20.725 "sock_priority": 0, 00:19:20.725 "abort_timeout_sec": 1, 00:19:20.725 "ack_timeout": 0, 00:19:20.725 "data_wr_pool_size": 0 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "nvmf_create_subsystem", 00:19:20.725 "params": { 00:19:20.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.725 "allow_any_host": false, 00:19:20.725 "serial_number": "SPDK00000000000001", 00:19:20.725 "model_number": "SPDK bdev Controller", 00:19:20.725 "max_namespaces": 10, 00:19:20.725 "min_cntlid": 1, 00:19:20.725 "max_cntlid": 65519, 00:19:20.725 "ana_reporting": false 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "nvmf_subsystem_add_host", 00:19:20.725 "params": { 00:19:20.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.725 "host": "nqn.2016-06.io.spdk:host1", 00:19:20.725 "psk": "key0" 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "nvmf_subsystem_add_ns", 00:19:20.725 "params": { 00:19:20.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.725 "namespace": { 00:19:20.725 "nsid": 1, 00:19:20.725 "bdev_name": "malloc0", 00:19:20.725 "nguid": "E508CEC97C4D45469B8399DC03477151", 00:19:20.725 "uuid": "e508cec9-7c4d-4546-9b83-99dc03477151", 00:19:20.725 "no_auto_visible": false 00:19:20.725 } 00:19:20.725 } 00:19:20.725 }, 00:19:20.725 { 00:19:20.725 "method": "nvmf_subsystem_add_listener", 00:19:20.725 "params": { 00:19:20.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.725 "listen_address": { 00:19:20.725 "trtype": "TCP", 00:19:20.725 "adrfam": "IPv4", 00:19:20.725 "traddr": "10.0.0.2", 00:19:20.725 "trsvcid": "4420" 00:19:20.725 }, 00:19:20.725 "secure_channel": true 00:19:20.725 } 00:19:20.725 } 00:19:20.725 ] 00:19:20.725 } 00:19:20.725 ] 00:19:20.725 }' 00:19:20.725 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:19:20.985 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:20.985 "subsystems": [ 00:19:20.985 { 00:19:20.985 "subsystem": "keyring", 00:19:20.985 "config": [ 00:19:20.985 { 00:19:20.985 "method": "keyring_file_add_key", 00:19:20.985 "params": { 00:19:20.985 "name": "key0", 00:19:20.985 "path": "/tmp/tmp.756kRTrJb0" 00:19:20.985 } 00:19:20.985 } 00:19:20.985 ] 00:19:20.985 }, 00:19:20.986 { 00:19:20.986 "subsystem": "iobuf", 00:19:20.986 "config": [ 00:19:20.986 { 00:19:20.986 "method": "iobuf_set_options", 00:19:20.986 "params": { 00:19:20.986 "small_pool_count": 8192, 00:19:20.986 "large_pool_count": 1024, 00:19:20.986 "small_bufsize": 8192, 00:19:20.986 "large_bufsize": 135168 00:19:20.986 } 00:19:20.986 } 00:19:20.986 ] 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "subsystem": "sock", 00:19:20.986 "config": [ 00:19:20.986 { 00:19:20.986 "method": "sock_set_default_impl", 00:19:20.986 "params": { 00:19:20.986 "impl_name": "posix" 00:19:20.986 } 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "method": "sock_impl_set_options", 00:19:20.986 "params": { 00:19:20.986 "impl_name": "ssl", 00:19:20.986 "recv_buf_size": 4096, 00:19:20.986 "send_buf_size": 4096, 00:19:20.986 "enable_recv_pipe": true, 00:19:20.986 "enable_quickack": false, 00:19:20.986 "enable_placement_id": 0, 00:19:20.986 "enable_zerocopy_send_server": true, 00:19:20.986 "enable_zerocopy_send_client": false, 00:19:20.986 "zerocopy_threshold": 0, 00:19:20.986 "tls_version": 0, 00:19:20.986 "enable_ktls": false 00:19:20.986 } 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "method": "sock_impl_set_options", 00:19:20.986 "params": { 00:19:20.986 "impl_name": "posix", 00:19:20.986 "recv_buf_size": 2097152, 00:19:20.986 "send_buf_size": 2097152, 00:19:20.986 "enable_recv_pipe": true, 00:19:20.986 "enable_quickack": false, 00:19:20.986 "enable_placement_id": 0, 00:19:20.986 "enable_zerocopy_send_server": true, 00:19:20.986 "enable_zerocopy_send_client": false, 
00:19:20.986 "zerocopy_threshold": 0, 00:19:20.986 "tls_version": 0, 00:19:20.986 "enable_ktls": false 00:19:20.986 } 00:19:20.986 } 00:19:20.986 ] 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "subsystem": "vmd", 00:19:20.986 "config": [] 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "subsystem": "accel", 00:19:20.986 "config": [ 00:19:20.986 { 00:19:20.986 "method": "accel_set_options", 00:19:20.986 "params": { 00:19:20.986 "small_cache_size": 128, 00:19:20.986 "large_cache_size": 16, 00:19:20.986 "task_count": 2048, 00:19:20.986 "sequence_count": 2048, 00:19:20.986 "buf_count": 2048 00:19:20.986 } 00:19:20.986 } 00:19:20.986 ] 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "subsystem": "bdev", 00:19:20.986 "config": [ 00:19:20.986 { 00:19:20.986 "method": "bdev_set_options", 00:19:20.986 "params": { 00:19:20.986 "bdev_io_pool_size": 65535, 00:19:20.986 "bdev_io_cache_size": 256, 00:19:20.986 "bdev_auto_examine": true, 00:19:20.986 "iobuf_small_cache_size": 128, 00:19:20.986 "iobuf_large_cache_size": 16 00:19:20.986 } 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "method": "bdev_raid_set_options", 00:19:20.986 "params": { 00:19:20.986 "process_window_size_kb": 1024, 00:19:20.986 "process_max_bandwidth_mb_sec": 0 00:19:20.986 } 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "method": "bdev_iscsi_set_options", 00:19:20.986 "params": { 00:19:20.986 "timeout_sec": 30 00:19:20.986 } 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "method": "bdev_nvme_set_options", 00:19:20.986 "params": { 00:19:20.986 "action_on_timeout": "none", 00:19:20.986 "timeout_us": 0, 00:19:20.986 "timeout_admin_us": 0, 00:19:20.986 "keep_alive_timeout_ms": 10000, 00:19:20.986 "arbitration_burst": 0, 00:19:20.986 "low_priority_weight": 0, 00:19:20.986 "medium_priority_weight": 0, 00:19:20.986 "high_priority_weight": 0, 00:19:20.986 "nvme_adminq_poll_period_us": 10000, 00:19:20.986 "nvme_ioq_poll_period_us": 0, 00:19:20.986 "io_queue_requests": 512, 00:19:20.986 "delay_cmd_submit": true, 00:19:20.986 
"transport_retry_count": 4, 00:19:20.986 "bdev_retry_count": 3, 00:19:20.986 "transport_ack_timeout": 0, 00:19:20.986 "ctrlr_loss_timeout_sec": 0, 00:19:20.986 "reconnect_delay_sec": 0, 00:19:20.986 "fast_io_fail_timeout_sec": 0, 00:19:20.986 "disable_auto_failback": false, 00:19:20.986 "generate_uuids": false, 00:19:20.986 "transport_tos": 0, 00:19:20.986 "nvme_error_stat": false, 00:19:20.986 "rdma_srq_size": 0, 00:19:20.986 "io_path_stat": false, 00:19:20.986 "allow_accel_sequence": false, 00:19:20.986 "rdma_max_cq_size": 0, 00:19:20.986 "rdma_cm_event_timeout_ms": 0, 00:19:20.986 "dhchap_digests": [ 00:19:20.986 "sha256", 00:19:20.986 "sha384", 00:19:20.986 "sha512" 00:19:20.986 ], 00:19:20.986 "dhchap_dhgroups": [ 00:19:20.986 "null", 00:19:20.986 "ffdhe2048", 00:19:20.986 "ffdhe3072", 00:19:20.986 "ffdhe4096", 00:19:20.986 "ffdhe6144", 00:19:20.986 "ffdhe8192" 00:19:20.986 ] 00:19:20.986 } 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "method": "bdev_nvme_attach_controller", 00:19:20.986 "params": { 00:19:20.986 "name": "TLSTEST", 00:19:20.986 "trtype": "TCP", 00:19:20.986 "adrfam": "IPv4", 00:19:20.986 "traddr": "10.0.0.2", 00:19:20.986 "trsvcid": "4420", 00:19:20.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.986 "prchk_reftag": false, 00:19:20.986 "prchk_guard": false, 00:19:20.986 "ctrlr_loss_timeout_sec": 0, 00:19:20.986 "reconnect_delay_sec": 0, 00:19:20.986 "fast_io_fail_timeout_sec": 0, 00:19:20.986 "psk": "key0", 00:19:20.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.986 "hdgst": false, 00:19:20.986 "ddgst": false 00:19:20.986 } 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "method": "bdev_nvme_set_hotplug", 00:19:20.986 "params": { 00:19:20.986 "period_us": 100000, 00:19:20.986 "enable": false 00:19:20.986 } 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "method": "bdev_wait_for_examine" 00:19:20.986 } 00:19:20.986 ] 00:19:20.986 }, 00:19:20.986 { 00:19:20.986 "subsystem": "nbd", 00:19:20.986 "config": [] 00:19:20.986 } 00:19:20.986 ] 
00:19:20.986 }' 00:19:20.986 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3984601 00:19:20.986 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3984601 ']' 00:19:20.986 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3984601 00:19:20.986 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:20.986 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.986 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3984601 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3984601' 00:19:21.247 killing process with pid 3984601 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3984601 00:19:21.247 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.247 00:19:21.247 Latency(us) 00:19:21.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.247 =================================================================================================================== 00:19:21.247 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3984601 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3984211 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3984211 ']' 00:19:21.247 15:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3984211 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.247 15:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3984211 00:19:21.247 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:21.247 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:21.247 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3984211' 00:19:21.247 killing process with pid 3984211 00:19:21.247 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3984211 00:19:21.247 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3984211 00:19:21.523 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:21.523 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:21.523 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:21.523 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.523 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:21.523 "subsystems": [ 00:19:21.523 { 00:19:21.523 "subsystem": "keyring", 00:19:21.523 "config": [ 00:19:21.523 { 00:19:21.523 "method": "keyring_file_add_key", 00:19:21.523 "params": { 00:19:21.523 "name": "key0", 00:19:21.523 "path": "/tmp/tmp.756kRTrJb0" 00:19:21.523 } 00:19:21.523 } 00:19:21.523 ] 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "subsystem": "iobuf", 
00:19:21.523 "config": [ 00:19:21.523 { 00:19:21.523 "method": "iobuf_set_options", 00:19:21.523 "params": { 00:19:21.523 "small_pool_count": 8192, 00:19:21.523 "large_pool_count": 1024, 00:19:21.523 "small_bufsize": 8192, 00:19:21.523 "large_bufsize": 135168 00:19:21.523 } 00:19:21.523 } 00:19:21.523 ] 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "subsystem": "sock", 00:19:21.523 "config": [ 00:19:21.523 { 00:19:21.523 "method": "sock_set_default_impl", 00:19:21.523 "params": { 00:19:21.523 "impl_name": "posix" 00:19:21.523 } 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "method": "sock_impl_set_options", 00:19:21.523 "params": { 00:19:21.523 "impl_name": "ssl", 00:19:21.523 "recv_buf_size": 4096, 00:19:21.523 "send_buf_size": 4096, 00:19:21.523 "enable_recv_pipe": true, 00:19:21.523 "enable_quickack": false, 00:19:21.523 "enable_placement_id": 0, 00:19:21.523 "enable_zerocopy_send_server": true, 00:19:21.523 "enable_zerocopy_send_client": false, 00:19:21.523 "zerocopy_threshold": 0, 00:19:21.523 "tls_version": 0, 00:19:21.523 "enable_ktls": false 00:19:21.523 } 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "method": "sock_impl_set_options", 00:19:21.523 "params": { 00:19:21.523 "impl_name": "posix", 00:19:21.523 "recv_buf_size": 2097152, 00:19:21.523 "send_buf_size": 2097152, 00:19:21.523 "enable_recv_pipe": true, 00:19:21.523 "enable_quickack": false, 00:19:21.523 "enable_placement_id": 0, 00:19:21.523 "enable_zerocopy_send_server": true, 00:19:21.523 "enable_zerocopy_send_client": false, 00:19:21.523 "zerocopy_threshold": 0, 00:19:21.523 "tls_version": 0, 00:19:21.523 "enable_ktls": false 00:19:21.523 } 00:19:21.523 } 00:19:21.523 ] 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "subsystem": "vmd", 00:19:21.523 "config": [] 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "subsystem": "accel", 00:19:21.523 "config": [ 00:19:21.523 { 00:19:21.523 "method": "accel_set_options", 00:19:21.523 "params": { 00:19:21.523 "small_cache_size": 128, 00:19:21.523 "large_cache_size": 16, 
00:19:21.523 "task_count": 2048, 00:19:21.523 "sequence_count": 2048, 00:19:21.523 "buf_count": 2048 00:19:21.523 } 00:19:21.523 } 00:19:21.523 ] 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "subsystem": "bdev", 00:19:21.523 "config": [ 00:19:21.523 { 00:19:21.523 "method": "bdev_set_options", 00:19:21.523 "params": { 00:19:21.523 "bdev_io_pool_size": 65535, 00:19:21.523 "bdev_io_cache_size": 256, 00:19:21.523 "bdev_auto_examine": true, 00:19:21.523 "iobuf_small_cache_size": 128, 00:19:21.523 "iobuf_large_cache_size": 16 00:19:21.523 } 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "method": "bdev_raid_set_options", 00:19:21.523 "params": { 00:19:21.523 "process_window_size_kb": 1024, 00:19:21.523 "process_max_bandwidth_mb_sec": 0 00:19:21.523 } 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "method": "bdev_iscsi_set_options", 00:19:21.523 "params": { 00:19:21.523 "timeout_sec": 30 00:19:21.523 } 00:19:21.523 }, 00:19:21.523 { 00:19:21.523 "method": "bdev_nvme_set_options", 00:19:21.523 "params": { 00:19:21.523 "action_on_timeout": "none", 00:19:21.523 "timeout_us": 0, 00:19:21.523 "timeout_admin_us": 0, 00:19:21.523 "keep_alive_timeout_ms": 10000, 00:19:21.523 "arbitration_burst": 0, 00:19:21.523 "low_priority_weight": 0, 00:19:21.523 "medium_priority_weight": 0, 00:19:21.523 "high_priority_weight": 0, 00:19:21.523 "nvme_adminq_poll_period_us": 10000, 00:19:21.523 "nvme_ioq_poll_period_us": 0, 00:19:21.523 "io_queue_requests": 0, 00:19:21.523 "delay_cmd_submit": true, 00:19:21.523 "transport_retry_count": 4, 00:19:21.523 "bdev_retry_count": 3, 00:19:21.523 "transport_ack_timeout": 0, 00:19:21.523 "ctrlr_loss_timeout_sec": 0, 00:19:21.523 "reconnect_delay_sec": 0, 00:19:21.523 "fast_io_fail_timeout_sec": 0, 00:19:21.523 "disable_auto_failback": false, 00:19:21.523 "generate_uuids": false, 00:19:21.524 "transport_tos": 0, 00:19:21.524 "nvme_error_stat": false, 00:19:21.524 "rdma_srq_size": 0, 00:19:21.524 "io_path_stat": false, 00:19:21.524 "allow_accel_sequence": false, 
00:19:21.524 "rdma_max_cq_size": 0, 00:19:21.524 "rdma_cm_event_timeout_ms": 0, 00:19:21.524 "dhchap_digests": [ 00:19:21.524 "sha256", 00:19:21.524 "sha384", 00:19:21.524 "sha512" 00:19:21.524 ], 00:19:21.524 "dhchap_dhgroups": [ 00:19:21.524 "null", 00:19:21.524 "ffdhe2048", 00:19:21.524 "ffdhe3072", 00:19:21.524 "ffdhe4096", 00:19:21.524 "ffdhe6144", 00:19:21.524 "ffdhe8192" 00:19:21.524 ] 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "bdev_nvme_set_hotplug", 00:19:21.524 "params": { 00:19:21.524 "period_us": 100000, 00:19:21.524 "enable": false 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "bdev_malloc_create", 00:19:21.524 "params": { 00:19:21.524 "name": "malloc0", 00:19:21.524 "num_blocks": 8192, 00:19:21.524 "block_size": 4096, 00:19:21.524 "physical_block_size": 4096, 00:19:21.524 "uuid": "e508cec9-7c4d-4546-9b83-99dc03477151", 00:19:21.524 "optimal_io_boundary": 0, 00:19:21.524 "md_size": 0, 00:19:21.524 "dif_type": 0, 00:19:21.524 "dif_is_head_of_md": false, 00:19:21.524 "dif_pi_format": 0 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "bdev_wait_for_examine" 00:19:21.524 } 00:19:21.524 ] 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "subsystem": "nbd", 00:19:21.524 "config": [] 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "subsystem": "scheduler", 00:19:21.524 "config": [ 00:19:21.524 { 00:19:21.524 "method": "framework_set_scheduler", 00:19:21.524 "params": { 00:19:21.524 "name": "static" 00:19:21.524 } 00:19:21.524 } 00:19:21.524 ] 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "subsystem": "nvmf", 00:19:21.524 "config": [ 00:19:21.524 { 00:19:21.524 "method": "nvmf_set_config", 00:19:21.524 "params": { 00:19:21.524 "discovery_filter": "match_any", 00:19:21.524 "admin_cmd_passthru": { 00:19:21.524 "identify_ctrlr": false 00:19:21.524 }, 00:19:21.524 "dhchap_digests": [ 00:19:21.524 "sha256", 00:19:21.524 "sha384", 00:19:21.524 "sha512" 00:19:21.524 ], 00:19:21.524 "dhchap_dhgroups": [ 
00:19:21.524 "null", 00:19:21.524 "ffdhe2048", 00:19:21.524 "ffdhe3072", 00:19:21.524 "ffdhe4096", 00:19:21.524 "ffdhe6144", 00:19:21.524 "ffdhe8192" 00:19:21.524 ] 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "nvmf_set_max_subsystems", 00:19:21.524 "params": { 00:19:21.524 "max_subsystems": 1024 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "nvmf_set_crdt", 00:19:21.524 "params": { 00:19:21.524 "crdt1": 0, 00:19:21.524 "crdt2": 0, 00:19:21.524 "crdt3": 0 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "nvmf_create_transport", 00:19:21.524 "params": { 00:19:21.524 "trtype": "TCP", 00:19:21.524 "max_queue_depth": 128, 00:19:21.524 "max_io_qpairs_per_ctrlr": 127, 00:19:21.524 "in_capsule_data_size": 4096, 00:19:21.524 "max_io_size": 131072, 00:19:21.524 "io_unit_size": 131072, 00:19:21.524 "max_aq_depth": 128, 00:19:21.524 "num_shared_buffers": 511, 00:19:21.524 "buf_cache_size": 4294967295, 00:19:21.524 "dif_insert_or_strip": false, 00:19:21.524 "zcopy": false, 00:19:21.524 "c2h_success": false, 00:19:21.524 "sock_priority": 0, 00:19:21.524 "abort_timeout_sec": 1, 00:19:21.524 "ack_timeout": 0, 00:19:21.524 "data_wr_pool_size": 0 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "nvmf_create_subsystem", 00:19:21.524 "params": { 00:19:21.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.524 "allow_any_host": false, 00:19:21.524 "serial_number": "SPDK00000000000001", 00:19:21.524 "model_number": "SPDK bdev Controller", 00:19:21.524 "max_namespaces": 10, 00:19:21.524 "min_cntlid": 1, 00:19:21.524 "max_cntlid": 65519, 00:19:21.524 "ana_reporting": false 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "nvmf_subsystem_add_host", 00:19:21.524 "params": { 00:19:21.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.524 "host": "nqn.2016-06.io.spdk:host1", 00:19:21.524 "psk": "key0" 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": 
"nvmf_subsystem_add_ns", 00:19:21.524 "params": { 00:19:21.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.524 "namespace": { 00:19:21.524 "nsid": 1, 00:19:21.524 "bdev_name": "malloc0", 00:19:21.524 "nguid": "E508CEC97C4D45469B8399DC03477151", 00:19:21.524 "uuid": "e508cec9-7c4d-4546-9b83-99dc03477151", 00:19:21.524 "no_auto_visible": false 00:19:21.524 } 00:19:21.524 } 00:19:21.524 }, 00:19:21.524 { 00:19:21.524 "method": "nvmf_subsystem_add_listener", 00:19:21.524 "params": { 00:19:21.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.525 "listen_address": { 00:19:21.525 "trtype": "TCP", 00:19:21.525 "adrfam": "IPv4", 00:19:21.525 "traddr": "10.0.0.2", 00:19:21.525 "trsvcid": "4420" 00:19:21.525 }, 00:19:21.525 "secure_channel": true 00:19:21.525 } 00:19:21.525 } 00:19:21.525 ] 00:19:21.525 } 00:19:21.525 ] 00:19:21.525 }' 00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3984940 00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3984940 00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3984940 ']' 00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.525 15:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.525 [2024-10-01 15:16:31.225096] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:21.525 [2024-10-01 15:16:31.225148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.525 [2024-10-01 15:16:31.306329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.525 [2024-10-01 15:16:31.359189] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.525 [2024-10-01 15:16:31.359223] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.525 [2024-10-01 15:16:31.359229] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.525 [2024-10-01 15:16:31.359236] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.525 [2024-10-01 15:16:31.359240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:21.525 [2024-10-01 15:16:31.359288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.786 [2024-10-01 15:16:31.575688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.786 [2024-10-01 15:16:31.607661] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.786 [2024-10-01 15:16:31.607834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3985288 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3985288 /var/tmp/bdevperf.sock 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3985288 ']' 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:22.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.357 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:22.357 "subsystems": [ 00:19:22.357 { 00:19:22.357 "subsystem": "keyring", 00:19:22.357 "config": [ 00:19:22.357 { 00:19:22.357 "method": "keyring_file_add_key", 00:19:22.357 "params": { 00:19:22.357 "name": "key0", 00:19:22.357 "path": "/tmp/tmp.756kRTrJb0" 00:19:22.357 } 00:19:22.357 } 00:19:22.357 ] 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "subsystem": "iobuf", 00:19:22.357 "config": [ 00:19:22.357 { 00:19:22.357 "method": "iobuf_set_options", 00:19:22.357 "params": { 00:19:22.357 "small_pool_count": 8192, 00:19:22.357 "large_pool_count": 1024, 00:19:22.357 "small_bufsize": 8192, 00:19:22.357 "large_bufsize": 135168 00:19:22.357 } 00:19:22.357 } 00:19:22.357 ] 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "subsystem": "sock", 00:19:22.357 "config": [ 00:19:22.357 { 00:19:22.357 "method": "sock_set_default_impl", 00:19:22.357 "params": { 00:19:22.357 "impl_name": "posix" 00:19:22.357 } 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "method": "sock_impl_set_options", 00:19:22.357 "params": { 00:19:22.357 "impl_name": "ssl", 00:19:22.357 "recv_buf_size": 4096, 00:19:22.357 "send_buf_size": 4096, 00:19:22.357 "enable_recv_pipe": true, 00:19:22.357 "enable_quickack": false, 00:19:22.357 "enable_placement_id": 0, 00:19:22.357 "enable_zerocopy_send_server": true, 00:19:22.357 "enable_zerocopy_send_client": false, 00:19:22.357 
"zerocopy_threshold": 0, 00:19:22.357 "tls_version": 0, 00:19:22.357 "enable_ktls": false 00:19:22.357 } 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "method": "sock_impl_set_options", 00:19:22.357 "params": { 00:19:22.357 "impl_name": "posix", 00:19:22.357 "recv_buf_size": 2097152, 00:19:22.357 "send_buf_size": 2097152, 00:19:22.357 "enable_recv_pipe": true, 00:19:22.357 "enable_quickack": false, 00:19:22.357 "enable_placement_id": 0, 00:19:22.357 "enable_zerocopy_send_server": true, 00:19:22.357 "enable_zerocopy_send_client": false, 00:19:22.357 "zerocopy_threshold": 0, 00:19:22.357 "tls_version": 0, 00:19:22.357 "enable_ktls": false 00:19:22.357 } 00:19:22.357 } 00:19:22.357 ] 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "subsystem": "vmd", 00:19:22.357 "config": [] 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "subsystem": "accel", 00:19:22.357 "config": [ 00:19:22.357 { 00:19:22.357 "method": "accel_set_options", 00:19:22.357 "params": { 00:19:22.357 "small_cache_size": 128, 00:19:22.357 "large_cache_size": 16, 00:19:22.357 "task_count": 2048, 00:19:22.357 "sequence_count": 2048, 00:19:22.357 "buf_count": 2048 00:19:22.357 } 00:19:22.357 } 00:19:22.357 ] 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "subsystem": "bdev", 00:19:22.357 "config": [ 00:19:22.357 { 00:19:22.357 "method": "bdev_set_options", 00:19:22.357 "params": { 00:19:22.357 "bdev_io_pool_size": 65535, 00:19:22.357 "bdev_io_cache_size": 256, 00:19:22.357 "bdev_auto_examine": true, 00:19:22.357 "iobuf_small_cache_size": 128, 00:19:22.357 "iobuf_large_cache_size": 16 00:19:22.357 } 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "method": "bdev_raid_set_options", 00:19:22.357 "params": { 00:19:22.357 "process_window_size_kb": 1024, 00:19:22.357 "process_max_bandwidth_mb_sec": 0 00:19:22.357 } 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "method": "bdev_iscsi_set_options", 00:19:22.357 "params": { 00:19:22.357 "timeout_sec": 30 00:19:22.357 } 00:19:22.357 }, 00:19:22.357 { 00:19:22.357 "method": 
"bdev_nvme_set_options", 00:19:22.357 "params": { 00:19:22.357 "action_on_timeout": "none", 00:19:22.357 "timeout_us": 0, 00:19:22.357 "timeout_admin_us": 0, 00:19:22.357 "keep_alive_timeout_ms": 10000, 00:19:22.357 "arbitration_burst": 0, 00:19:22.357 "low_priority_weight": 0, 00:19:22.357 "medium_priority_weight": 0, 00:19:22.357 "high_priority_weight": 0, 00:19:22.357 "nvme_adminq_poll_period_us": 10000, 00:19:22.357 "nvme_ioq_poll_period_us": 0, 00:19:22.357 "io_queue_requests": 512, 00:19:22.357 "delay_cmd_submit": true, 00:19:22.357 "transport_retry_count": 4, 00:19:22.357 "bdev_retry_count": 3, 00:19:22.357 "transport_ack_timeout": 0, 00:19:22.357 "ctrlr_loss_timeout_sec": 0, 00:19:22.357 "reconnect_delay_sec": 0, 00:19:22.357 "fast_io_fail_timeout_sec": 0, 00:19:22.357 "disable_auto_failback": false, 00:19:22.357 "generate_uuids": false, 00:19:22.357 "transport_tos": 0, 00:19:22.357 "nvme_error_stat": false, 00:19:22.357 "rdma_srq_size": 0, 00:19:22.358 "io_path_stat": false, 00:19:22.358 "allow_accel_sequence": false, 00:19:22.358 "rdma_max_cq_size": 0, 00:19:22.358 "rdma_cm_event_timeout_ms": 0, 00:19:22.358 "dhchap_digests": [ 00:19:22.358 "sha256", 00:19:22.358 "sha384", 00:19:22.358 "sha512" 00:19:22.358 ], 00:19:22.358 "dhchap_dhgroups": [ 00:19:22.358 "null", 00:19:22.358 "ffdhe2048", 00:19:22.358 "ffdhe3072", 00:19:22.358 "ffdhe4096", 00:19:22.358 "ffdhe6144", 00:19:22.358 "ffdhe8192" 00:19:22.358 ] 00:19:22.358 } 00:19:22.358 }, 00:19:22.358 { 00:19:22.358 "method": "bdev_nvme_attach_controller", 00:19:22.358 "params": { 00:19:22.358 "name": "TLSTEST", 00:19:22.358 "trtype": "TCP", 00:19:22.358 "adrfam": "IPv4", 00:19:22.358 "traddr": "10.0.0.2", 00:19:22.358 "trsvcid": "4420", 00:19:22.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.358 "prchk_reftag": false, 00:19:22.358 "prchk_guard": false, 00:19:22.358 "ctrlr_loss_timeout_sec": 0, 00:19:22.358 "reconnect_delay_sec": 0, 00:19:22.358 "fast_io_fail_timeout_sec": 0, 00:19:22.358 "psk": 
"key0", 00:19:22.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.358 "hdgst": false, 00:19:22.358 "ddgst": false 00:19:22.358 } 00:19:22.358 }, 00:19:22.358 { 00:19:22.358 "method": "bdev_nvme_set_hotplug", 00:19:22.358 "params": { 00:19:22.358 "period_us": 100000, 00:19:22.358 "enable": false 00:19:22.358 } 00:19:22.358 }, 00:19:22.358 { 00:19:22.358 "method": "bdev_wait_for_examine" 00:19:22.358 } 00:19:22.358 ] 00:19:22.358 }, 00:19:22.358 { 00:19:22.358 "subsystem": "nbd", 00:19:22.358 "config": [] 00:19:22.358 } 00:19:22.358 ] 00:19:22.358 }' 00:19:22.358 [2024-10-01 15:16:32.146793] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:22.358 [2024-10-01 15:16:32.146847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985288 ] 00:19:22.358 [2024-10-01 15:16:32.198100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.618 [2024-10-01 15:16:32.251542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.618 [2024-10-01 15:16:32.386640] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.188 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.188 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:23.188 15:16:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:23.188 Running I/O for 10 seconds... 
00:19:33.496 5251.00 IOPS, 20.51 MiB/s 5684.50 IOPS, 22.21 MiB/s 5599.00 IOPS, 21.87 MiB/s 5489.75 IOPS, 21.44 MiB/s 5542.80 IOPS, 21.65 MiB/s 5616.33 IOPS, 21.94 MiB/s 5575.14 IOPS, 21.78 MiB/s 5570.38 IOPS, 21.76 MiB/s 5550.00 IOPS, 21.68 MiB/s 5578.20 IOPS, 21.79 MiB/s 00:19:33.496 Latency(us) 00:19:33.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.496 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:33.496 Verification LBA range: start 0x0 length 0x2000 00:19:33.496 TLSTESTn1 : 10.02 5579.53 21.80 0.00 0.00 22909.66 6007.47 81701.55 00:19:33.496 =================================================================================================================== 00:19:33.496 Total : 5579.53 21.80 0.00 0.00 22909.66 6007.47 81701.55 00:19:33.496 { 00:19:33.496 "results": [ 00:19:33.496 { 00:19:33.496 "job": "TLSTESTn1", 00:19:33.496 "core_mask": "0x4", 00:19:33.496 "workload": "verify", 00:19:33.496 "status": "finished", 00:19:33.496 "verify_range": { 00:19:33.497 "start": 0, 00:19:33.497 "length": 8192 00:19:33.497 }, 00:19:33.497 "queue_depth": 128, 00:19:33.497 "io_size": 4096, 00:19:33.497 "runtime": 10.020553, 00:19:33.497 "iops": 5579.532387084824, 00:19:33.497 "mibps": 21.795048387050095, 00:19:33.497 "io_failed": 0, 00:19:33.497 "io_timeout": 0, 00:19:33.497 "avg_latency_us": 22909.660361771894, 00:19:33.497 "min_latency_us": 6007.466666666666, 00:19:33.497 "max_latency_us": 81701.54666666666 00:19:33.497 } 00:19:33.497 ], 00:19:33.497 "core_count": 1 00:19:33.497 } 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3985288 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3985288 ']' 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 3985288 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3985288 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3985288' 00:19:33.497 killing process with pid 3985288 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3985288 00:19:33.497 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.497 00:19:33.497 Latency(us) 00:19:33.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.497 =================================================================================================================== 00:19:33.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3985288 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3984940 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3984940 ']' 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3984940 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.497 15:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3984940 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3984940' 00:19:33.497 killing process with pid 3984940 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3984940 00:19:33.497 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3984940 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3987336 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3987336 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3987336 ']' 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:33.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.758 15:16:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:33.758 [2024-10-01 15:16:43.527104] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:33.758 [2024-10-01 15:16:43.527160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.758 [2024-10-01 15:16:43.592575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.018 [2024-10-01 15:16:43.656640] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.018 [2024-10-01 15:16:43.656678] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.018 [2024-10-01 15:16:43.656686] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.018 [2024-10-01 15:16:43.656692] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.018 [2024-10-01 15:16:43.656698] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:34.018 [2024-10-01 15:16:43.656717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.756kRTrJb0 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.756kRTrJb0 00:19:34.587 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:34.846 [2024-10-01 15:16:44.500827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.846 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.106 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.106 [2024-10-01 15:16:44.877769] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.106 [2024-10-01 15:16:44.877968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:35.106 15:16:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.366 malloc0 00:19:35.366 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.626 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:35.626 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3987909 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3987909 /var/tmp/bdevperf.sock 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3987909 ']' 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:19:35.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.886 15:16:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.886 [2024-10-01 15:16:45.707436] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:35.886 [2024-10-01 15:16:45.707508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3987909 ] 00:19:36.145 [2024-10-01 15:16:45.786151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.145 [2024-10-01 15:16:45.840562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.715 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.715 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:36.715 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:36.975 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:36.975 [2024-10-01 15:16:46.789076] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.234 nvme0n1 00:19:37.234 15:16:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:37.234 Running I/O for 1 seconds... 00:19:38.174 3922.00 IOPS, 15.32 MiB/s 00:19:38.174 Latency(us) 00:19:38.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.174 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:38.174 Verification LBA range: start 0x0 length 0x2000 00:19:38.174 nvme0n1 : 1.02 3974.95 15.53 0.00 0.00 31980.23 4505.60 75147.95 00:19:38.174 =================================================================================================================== 00:19:38.174 Total : 3974.95 15.53 0.00 0.00 31980.23 4505.60 75147.95 00:19:38.174 { 00:19:38.174 "results": [ 00:19:38.174 { 00:19:38.174 "job": "nvme0n1", 00:19:38.174 "core_mask": "0x2", 00:19:38.174 "workload": "verify", 00:19:38.174 "status": "finished", 00:19:38.174 "verify_range": { 00:19:38.174 "start": 0, 00:19:38.174 "length": 8192 00:19:38.174 }, 00:19:38.174 "queue_depth": 128, 00:19:38.174 "io_size": 4096, 00:19:38.174 "runtime": 1.019132, 00:19:38.174 "iops": 3974.9512330100515, 00:19:38.174 "mibps": 15.527153253945514, 00:19:38.174 "io_failed": 0, 00:19:38.174 "io_timeout": 0, 00:19:38.174 "avg_latency_us": 31980.228618448124, 00:19:38.174 "min_latency_us": 4505.6, 00:19:38.174 "max_latency_us": 75147.94666666667 00:19:38.174 } 00:19:38.174 ], 00:19:38.174 "core_count": 1 00:19:38.174 } 00:19:38.174 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3987909 00:19:38.174 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3987909 ']' 00:19:38.174 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3987909 00:19:38.174 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.174 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.174 15:16:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3987909 00:19:38.433 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:38.433 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:38.433 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3987909' 00:19:38.433 killing process with pid 3987909 00:19:38.433 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3987909 00:19:38.434 Received shutdown signal, test time was about 1.000000 seconds 00:19:38.434 00:19:38.434 Latency(us) 00:19:38.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.434 =================================================================================================================== 00:19:38.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3987909 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3987336 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3987336 ']' 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3987336 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3987336 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3987336' 00:19:38.434 killing process with pid 3987336 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3987336 00:19:38.434 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3987336 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3988357 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3988357 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3988357 ']' 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:38.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.693 15:16:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.693 [2024-10-01 15:16:48.440970] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:38.693 [2024-10-01 15:16:48.441032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.694 [2024-10-01 15:16:48.507568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.953 [2024-10-01 15:16:48.570770] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.953 [2024-10-01 15:16:48.570812] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.953 [2024-10-01 15:16:48.570820] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.953 [2024-10-01 15:16:48.570827] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.953 [2024-10-01 15:16:48.570833] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.953 [2024-10-01 15:16:48.570852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.523 [2024-10-01 15:16:49.291076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.523 malloc0 00:19:39.523 [2024-10-01 15:16:49.329689] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.523 [2024-10-01 15:16:49.329897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3988706 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3988706 /var/tmp/bdevperf.sock 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3988706 ']' 00:19:39.523 15:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.523 15:16:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:39.783 [2024-10-01 15:16:49.407564] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:19:39.783 [2024-10-01 15:16:49.407613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3988706 ] 00:19:39.783 [2024-10-01 15:16:49.484726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.783 [2024-10-01 15:16:49.538587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.352 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.352 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:40.352 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.756kRTrJb0 00:19:40.613 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:40.873 [2024-10-01 15:16:50.531244] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.873 nvme0n1 00:19:40.873 15:16:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:40.873 Running I/O for 1 seconds... 
00:19:42.261 3906.00 IOPS, 15.26 MiB/s 00:19:42.261 Latency(us) 00:19:42.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.261 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:42.261 Verification LBA range: start 0x0 length 0x2000 00:19:42.261 nvme0n1 : 1.05 3832.98 14.97 0.00 0.00 32664.67 6171.31 67283.63 00:19:42.261 =================================================================================================================== 00:19:42.261 Total : 3832.98 14.97 0.00 0.00 32664.67 6171.31 67283.63 00:19:42.261 { 00:19:42.261 "results": [ 00:19:42.261 { 00:19:42.261 "job": "nvme0n1", 00:19:42.261 "core_mask": "0x2", 00:19:42.261 "workload": "verify", 00:19:42.261 "status": "finished", 00:19:42.261 "verify_range": { 00:19:42.261 "start": 0, 00:19:42.261 "length": 8192 00:19:42.261 }, 00:19:42.261 "queue_depth": 128, 00:19:42.261 "io_size": 4096, 00:19:42.261 "runtime": 1.052707, 00:19:42.261 "iops": 3832.975367314932, 00:19:42.261 "mibps": 14.972560028573954, 00:19:42.261 "io_failed": 0, 00:19:42.261 "io_timeout": 0, 00:19:42.261 "avg_latency_us": 32664.667783560508, 00:19:42.261 "min_latency_us": 6171.306666666666, 00:19:42.261 "max_latency_us": 67283.62666666666 00:19:42.261 } 00:19:42.261 ], 00:19:42.261 "core_count": 1 00:19:42.261 } 00:19:42.261 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:42.261 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.261 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.261 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.261 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:42.261 "subsystems": [ 00:19:42.261 { 00:19:42.261 "subsystem": "keyring", 00:19:42.261 "config": [ 00:19:42.261 { 00:19:42.261 "method": 
"keyring_file_add_key", 00:19:42.261 "params": { 00:19:42.261 "name": "key0", 00:19:42.261 "path": "/tmp/tmp.756kRTrJb0" 00:19:42.261 } 00:19:42.261 } 00:19:42.261 ] 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "subsystem": "iobuf", 00:19:42.261 "config": [ 00:19:42.261 { 00:19:42.261 "method": "iobuf_set_options", 00:19:42.261 "params": { 00:19:42.261 "small_pool_count": 8192, 00:19:42.261 "large_pool_count": 1024, 00:19:42.261 "small_bufsize": 8192, 00:19:42.261 "large_bufsize": 135168 00:19:42.261 } 00:19:42.261 } 00:19:42.261 ] 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "subsystem": "sock", 00:19:42.261 "config": [ 00:19:42.261 { 00:19:42.261 "method": "sock_set_default_impl", 00:19:42.261 "params": { 00:19:42.261 "impl_name": "posix" 00:19:42.261 } 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "method": "sock_impl_set_options", 00:19:42.261 "params": { 00:19:42.261 "impl_name": "ssl", 00:19:42.261 "recv_buf_size": 4096, 00:19:42.261 "send_buf_size": 4096, 00:19:42.261 "enable_recv_pipe": true, 00:19:42.261 "enable_quickack": false, 00:19:42.261 "enable_placement_id": 0, 00:19:42.261 "enable_zerocopy_send_server": true, 00:19:42.261 "enable_zerocopy_send_client": false, 00:19:42.261 "zerocopy_threshold": 0, 00:19:42.261 "tls_version": 0, 00:19:42.261 "enable_ktls": false 00:19:42.261 } 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "method": "sock_impl_set_options", 00:19:42.261 "params": { 00:19:42.261 "impl_name": "posix", 00:19:42.261 "recv_buf_size": 2097152, 00:19:42.261 "send_buf_size": 2097152, 00:19:42.261 "enable_recv_pipe": true, 00:19:42.261 "enable_quickack": false, 00:19:42.261 "enable_placement_id": 0, 00:19:42.261 "enable_zerocopy_send_server": true, 00:19:42.261 "enable_zerocopy_send_client": false, 00:19:42.261 "zerocopy_threshold": 0, 00:19:42.261 "tls_version": 0, 00:19:42.261 "enable_ktls": false 00:19:42.261 } 00:19:42.261 } 00:19:42.261 ] 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "subsystem": "vmd", 00:19:42.261 "config": [] 00:19:42.261 }, 
00:19:42.261 { 00:19:42.261 "subsystem": "accel", 00:19:42.261 "config": [ 00:19:42.261 { 00:19:42.261 "method": "accel_set_options", 00:19:42.261 "params": { 00:19:42.261 "small_cache_size": 128, 00:19:42.261 "large_cache_size": 16, 00:19:42.261 "task_count": 2048, 00:19:42.261 "sequence_count": 2048, 00:19:42.261 "buf_count": 2048 00:19:42.261 } 00:19:42.261 } 00:19:42.261 ] 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "subsystem": "bdev", 00:19:42.261 "config": [ 00:19:42.261 { 00:19:42.261 "method": "bdev_set_options", 00:19:42.261 "params": { 00:19:42.261 "bdev_io_pool_size": 65535, 00:19:42.261 "bdev_io_cache_size": 256, 00:19:42.261 "bdev_auto_examine": true, 00:19:42.261 "iobuf_small_cache_size": 128, 00:19:42.261 "iobuf_large_cache_size": 16 00:19:42.261 } 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "method": "bdev_raid_set_options", 00:19:42.261 "params": { 00:19:42.261 "process_window_size_kb": 1024, 00:19:42.261 "process_max_bandwidth_mb_sec": 0 00:19:42.261 } 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "method": "bdev_iscsi_set_options", 00:19:42.261 "params": { 00:19:42.261 "timeout_sec": 30 00:19:42.261 } 00:19:42.261 }, 00:19:42.261 { 00:19:42.261 "method": "bdev_nvme_set_options", 00:19:42.261 "params": { 00:19:42.261 "action_on_timeout": "none", 00:19:42.261 "timeout_us": 0, 00:19:42.261 "timeout_admin_us": 0, 00:19:42.261 "keep_alive_timeout_ms": 10000, 00:19:42.261 "arbitration_burst": 0, 00:19:42.261 "low_priority_weight": 0, 00:19:42.261 "medium_priority_weight": 0, 00:19:42.261 "high_priority_weight": 0, 00:19:42.261 "nvme_adminq_poll_period_us": 10000, 00:19:42.261 "nvme_ioq_poll_period_us": 0, 00:19:42.261 "io_queue_requests": 0, 00:19:42.261 "delay_cmd_submit": true, 00:19:42.261 "transport_retry_count": 4, 00:19:42.261 "bdev_retry_count": 3, 00:19:42.261 "transport_ack_timeout": 0, 00:19:42.261 "ctrlr_loss_timeout_sec": 0, 00:19:42.261 "reconnect_delay_sec": 0, 00:19:42.261 "fast_io_fail_timeout_sec": 0, 00:19:42.262 
"disable_auto_failback": false, 00:19:42.262 "generate_uuids": false, 00:19:42.262 "transport_tos": 0, 00:19:42.262 "nvme_error_stat": false, 00:19:42.262 "rdma_srq_size": 0, 00:19:42.262 "io_path_stat": false, 00:19:42.262 "allow_accel_sequence": false, 00:19:42.262 "rdma_max_cq_size": 0, 00:19:42.262 "rdma_cm_event_timeout_ms": 0, 00:19:42.262 "dhchap_digests": [ 00:19:42.262 "sha256", 00:19:42.262 "sha384", 00:19:42.262 "sha512" 00:19:42.262 ], 00:19:42.262 "dhchap_dhgroups": [ 00:19:42.262 "null", 00:19:42.262 "ffdhe2048", 00:19:42.262 "ffdhe3072", 00:19:42.262 "ffdhe4096", 00:19:42.262 "ffdhe6144", 00:19:42.262 "ffdhe8192" 00:19:42.262 ] 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "bdev_nvme_set_hotplug", 00:19:42.262 "params": { 00:19:42.262 "period_us": 100000, 00:19:42.262 "enable": false 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "bdev_malloc_create", 00:19:42.262 "params": { 00:19:42.262 "name": "malloc0", 00:19:42.262 "num_blocks": 8192, 00:19:42.262 "block_size": 4096, 00:19:42.262 "physical_block_size": 4096, 00:19:42.262 "uuid": "5731b7fc-86a5-4663-8d78-6d880249029d", 00:19:42.262 "optimal_io_boundary": 0, 00:19:42.262 "md_size": 0, 00:19:42.262 "dif_type": 0, 00:19:42.262 "dif_is_head_of_md": false, 00:19:42.262 "dif_pi_format": 0 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "bdev_wait_for_examine" 00:19:42.262 } 00:19:42.262 ] 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "subsystem": "nbd", 00:19:42.262 "config": [] 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "subsystem": "scheduler", 00:19:42.262 "config": [ 00:19:42.262 { 00:19:42.262 "method": "framework_set_scheduler", 00:19:42.262 "params": { 00:19:42.262 "name": "static" 00:19:42.262 } 00:19:42.262 } 00:19:42.262 ] 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "subsystem": "nvmf", 00:19:42.262 "config": [ 00:19:42.262 { 00:19:42.262 "method": "nvmf_set_config", 00:19:42.262 "params": { 00:19:42.262 "discovery_filter": 
"match_any", 00:19:42.262 "admin_cmd_passthru": { 00:19:42.262 "identify_ctrlr": false 00:19:42.262 }, 00:19:42.262 "dhchap_digests": [ 00:19:42.262 "sha256", 00:19:42.262 "sha384", 00:19:42.262 "sha512" 00:19:42.262 ], 00:19:42.262 "dhchap_dhgroups": [ 00:19:42.262 "null", 00:19:42.262 "ffdhe2048", 00:19:42.262 "ffdhe3072", 00:19:42.262 "ffdhe4096", 00:19:42.262 "ffdhe6144", 00:19:42.262 "ffdhe8192" 00:19:42.262 ] 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "nvmf_set_max_subsystems", 00:19:42.262 "params": { 00:19:42.262 "max_subsystems": 1024 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "nvmf_set_crdt", 00:19:42.262 "params": { 00:19:42.262 "crdt1": 0, 00:19:42.262 "crdt2": 0, 00:19:42.262 "crdt3": 0 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "nvmf_create_transport", 00:19:42.262 "params": { 00:19:42.262 "trtype": "TCP", 00:19:42.262 "max_queue_depth": 128, 00:19:42.262 "max_io_qpairs_per_ctrlr": 127, 00:19:42.262 "in_capsule_data_size": 4096, 00:19:42.262 "max_io_size": 131072, 00:19:42.262 "io_unit_size": 131072, 00:19:42.262 "max_aq_depth": 128, 00:19:42.262 "num_shared_buffers": 511, 00:19:42.262 "buf_cache_size": 4294967295, 00:19:42.262 "dif_insert_or_strip": false, 00:19:42.262 "zcopy": false, 00:19:42.262 "c2h_success": false, 00:19:42.262 "sock_priority": 0, 00:19:42.262 "abort_timeout_sec": 1, 00:19:42.262 "ack_timeout": 0, 00:19:42.262 "data_wr_pool_size": 0 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "nvmf_create_subsystem", 00:19:42.262 "params": { 00:19:42.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.262 "allow_any_host": false, 00:19:42.262 "serial_number": "00000000000000000000", 00:19:42.262 "model_number": "SPDK bdev Controller", 00:19:42.262 "max_namespaces": 32, 00:19:42.262 "min_cntlid": 1, 00:19:42.262 "max_cntlid": 65519, 00:19:42.262 "ana_reporting": false 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": 
"nvmf_subsystem_add_host", 00:19:42.262 "params": { 00:19:42.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.262 "host": "nqn.2016-06.io.spdk:host1", 00:19:42.262 "psk": "key0" 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "nvmf_subsystem_add_ns", 00:19:42.262 "params": { 00:19:42.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.262 "namespace": { 00:19:42.262 "nsid": 1, 00:19:42.262 "bdev_name": "malloc0", 00:19:42.262 "nguid": "5731B7FC86A546638D786D880249029D", 00:19:42.262 "uuid": "5731b7fc-86a5-4663-8d78-6d880249029d", 00:19:42.262 "no_auto_visible": false 00:19:42.262 } 00:19:42.262 } 00:19:42.262 }, 00:19:42.262 { 00:19:42.262 "method": "nvmf_subsystem_add_listener", 00:19:42.262 "params": { 00:19:42.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.262 "listen_address": { 00:19:42.262 "trtype": "TCP", 00:19:42.262 "adrfam": "IPv4", 00:19:42.262 "traddr": "10.0.0.2", 00:19:42.262 "trsvcid": "4420" 00:19:42.262 }, 00:19:42.262 "secure_channel": false, 00:19:42.262 "sock_impl": "ssl" 00:19:42.262 } 00:19:42.262 } 00:19:42.262 ] 00:19:42.262 } 00:19:42.262 ] 00:19:42.262 }' 00:19:42.262 15:16:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:42.623 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:42.623 "subsystems": [ 00:19:42.623 { 00:19:42.623 "subsystem": "keyring", 00:19:42.623 "config": [ 00:19:42.623 { 00:19:42.623 "method": "keyring_file_add_key", 00:19:42.623 "params": { 00:19:42.623 "name": "key0", 00:19:42.623 "path": "/tmp/tmp.756kRTrJb0" 00:19:42.623 } 00:19:42.623 } 00:19:42.623 ] 00:19:42.623 }, 00:19:42.623 { 00:19:42.623 "subsystem": "iobuf", 00:19:42.623 "config": [ 00:19:42.623 { 00:19:42.623 "method": "iobuf_set_options", 00:19:42.623 "params": { 00:19:42.623 "small_pool_count": 8192, 00:19:42.623 "large_pool_count": 1024, 00:19:42.623 "small_bufsize": 
8192, 00:19:42.623 "large_bufsize": 135168 00:19:42.623 } 00:19:42.623 } 00:19:42.623 ] 00:19:42.623 }, 00:19:42.623 { 00:19:42.623 "subsystem": "sock", 00:19:42.623 "config": [ 00:19:42.623 { 00:19:42.623 "method": "sock_set_default_impl", 00:19:42.623 "params": { 00:19:42.623 "impl_name": "posix" 00:19:42.623 } 00:19:42.623 }, 00:19:42.623 { 00:19:42.623 "method": "sock_impl_set_options", 00:19:42.623 "params": { 00:19:42.623 "impl_name": "ssl", 00:19:42.623 "recv_buf_size": 4096, 00:19:42.623 "send_buf_size": 4096, 00:19:42.623 "enable_recv_pipe": true, 00:19:42.623 "enable_quickack": false, 00:19:42.623 "enable_placement_id": 0, 00:19:42.623 "enable_zerocopy_send_server": true, 00:19:42.623 "enable_zerocopy_send_client": false, 00:19:42.623 "zerocopy_threshold": 0, 00:19:42.623 "tls_version": 0, 00:19:42.623 "enable_ktls": false 00:19:42.623 } 00:19:42.623 }, 00:19:42.623 { 00:19:42.623 "method": "sock_impl_set_options", 00:19:42.623 "params": { 00:19:42.623 "impl_name": "posix", 00:19:42.623 "recv_buf_size": 2097152, 00:19:42.624 "send_buf_size": 2097152, 00:19:42.624 "enable_recv_pipe": true, 00:19:42.624 "enable_quickack": false, 00:19:42.624 "enable_placement_id": 0, 00:19:42.624 "enable_zerocopy_send_server": true, 00:19:42.624 "enable_zerocopy_send_client": false, 00:19:42.624 "zerocopy_threshold": 0, 00:19:42.624 "tls_version": 0, 00:19:42.624 "enable_ktls": false 00:19:42.624 } 00:19:42.624 } 00:19:42.624 ] 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "subsystem": "vmd", 00:19:42.624 "config": [] 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "subsystem": "accel", 00:19:42.624 "config": [ 00:19:42.624 { 00:19:42.624 "method": "accel_set_options", 00:19:42.624 "params": { 00:19:42.624 "small_cache_size": 128, 00:19:42.624 "large_cache_size": 16, 00:19:42.624 "task_count": 2048, 00:19:42.624 "sequence_count": 2048, 00:19:42.624 "buf_count": 2048 00:19:42.624 } 00:19:42.624 } 00:19:42.624 ] 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "subsystem": "bdev", 
00:19:42.624 "config": [ 00:19:42.624 { 00:19:42.624 "method": "bdev_set_options", 00:19:42.624 "params": { 00:19:42.624 "bdev_io_pool_size": 65535, 00:19:42.624 "bdev_io_cache_size": 256, 00:19:42.624 "bdev_auto_examine": true, 00:19:42.624 "iobuf_small_cache_size": 128, 00:19:42.624 "iobuf_large_cache_size": 16 00:19:42.624 } 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "method": "bdev_raid_set_options", 00:19:42.624 "params": { 00:19:42.624 "process_window_size_kb": 1024, 00:19:42.624 "process_max_bandwidth_mb_sec": 0 00:19:42.624 } 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "method": "bdev_iscsi_set_options", 00:19:42.624 "params": { 00:19:42.624 "timeout_sec": 30 00:19:42.624 } 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "method": "bdev_nvme_set_options", 00:19:42.624 "params": { 00:19:42.624 "action_on_timeout": "none", 00:19:42.624 "timeout_us": 0, 00:19:42.624 "timeout_admin_us": 0, 00:19:42.624 "keep_alive_timeout_ms": 10000, 00:19:42.624 "arbitration_burst": 0, 00:19:42.624 "low_priority_weight": 0, 00:19:42.624 "medium_priority_weight": 0, 00:19:42.624 "high_priority_weight": 0, 00:19:42.624 "nvme_adminq_poll_period_us": 10000, 00:19:42.624 "nvme_ioq_poll_period_us": 0, 00:19:42.624 "io_queue_requests": 512, 00:19:42.624 "delay_cmd_submit": true, 00:19:42.624 "transport_retry_count": 4, 00:19:42.624 "bdev_retry_count": 3, 00:19:42.624 "transport_ack_timeout": 0, 00:19:42.624 "ctrlr_loss_timeout_sec": 0, 00:19:42.624 "reconnect_delay_sec": 0, 00:19:42.624 "fast_io_fail_timeout_sec": 0, 00:19:42.624 "disable_auto_failback": false, 00:19:42.624 "generate_uuids": false, 00:19:42.624 "transport_tos": 0, 00:19:42.624 "nvme_error_stat": false, 00:19:42.624 "rdma_srq_size": 0, 00:19:42.624 "io_path_stat": false, 00:19:42.624 "allow_accel_sequence": false, 00:19:42.624 "rdma_max_cq_size": 0, 00:19:42.624 "rdma_cm_event_timeout_ms": 0, 00:19:42.624 "dhchap_digests": [ 00:19:42.624 "sha256", 00:19:42.624 "sha384", 00:19:42.624 "sha512" 00:19:42.624 ], 00:19:42.624 
"dhchap_dhgroups": [ 00:19:42.624 "null", 00:19:42.624 "ffdhe2048", 00:19:42.624 "ffdhe3072", 00:19:42.624 "ffdhe4096", 00:19:42.624 "ffdhe6144", 00:19:42.624 "ffdhe8192" 00:19:42.624 ] 00:19:42.624 } 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "method": "bdev_nvme_attach_controller", 00:19:42.624 "params": { 00:19:42.624 "name": "nvme0", 00:19:42.624 "trtype": "TCP", 00:19:42.624 "adrfam": "IPv4", 00:19:42.624 "traddr": "10.0.0.2", 00:19:42.624 "trsvcid": "4420", 00:19:42.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.624 "prchk_reftag": false, 00:19:42.624 "prchk_guard": false, 00:19:42.624 "ctrlr_loss_timeout_sec": 0, 00:19:42.624 "reconnect_delay_sec": 0, 00:19:42.624 "fast_io_fail_timeout_sec": 0, 00:19:42.624 "psk": "key0", 00:19:42.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.624 "hdgst": false, 00:19:42.624 "ddgst": false 00:19:42.624 } 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "method": "bdev_nvme_set_hotplug", 00:19:42.624 "params": { 00:19:42.624 "period_us": 100000, 00:19:42.624 "enable": false 00:19:42.624 } 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "method": "bdev_enable_histogram", 00:19:42.624 "params": { 00:19:42.624 "name": "nvme0n1", 00:19:42.624 "enable": true 00:19:42.624 } 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "method": "bdev_wait_for_examine" 00:19:42.624 } 00:19:42.624 ] 00:19:42.624 }, 00:19:42.624 { 00:19:42.624 "subsystem": "nbd", 00:19:42.624 "config": [] 00:19:42.624 } 00:19:42.624 ] 00:19:42.624 }' 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3988706 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3988706 ']' 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3988706 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3988706 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3988706' 00:19:42.624 killing process with pid 3988706 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3988706 00:19:42.624 Received shutdown signal, test time was about 1.000000 seconds 00:19:42.624 00:19:42.624 Latency(us) 00:19:42.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.624 =================================================================================================================== 00:19:42.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3988706 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3988357 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3988357 ']' 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3988357 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3988357 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:42.624 15:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3988357' 00:19:42.624 killing process with pid 3988357 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3988357 00:19:42.624 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3988357 00:19:42.932 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:42.932 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:42.932 "subsystems": [ 00:19:42.932 { 00:19:42.932 "subsystem": "keyring", 00:19:42.932 "config": [ 00:19:42.932 { 00:19:42.932 "method": "keyring_file_add_key", 00:19:42.932 "params": { 00:19:42.932 "name": "key0", 00:19:42.932 "path": "/tmp/tmp.756kRTrJb0" 00:19:42.932 } 00:19:42.932 } 00:19:42.932 ] 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "subsystem": "iobuf", 00:19:42.932 "config": [ 00:19:42.932 { 00:19:42.932 "method": "iobuf_set_options", 00:19:42.932 "params": { 00:19:42.932 "small_pool_count": 8192, 00:19:42.932 "large_pool_count": 1024, 00:19:42.932 "small_bufsize": 8192, 00:19:42.932 "large_bufsize": 135168 00:19:42.932 } 00:19:42.932 } 00:19:42.932 ] 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "subsystem": "sock", 00:19:42.932 "config": [ 00:19:42.932 { 00:19:42.932 "method": "sock_set_default_impl", 00:19:42.932 "params": { 00:19:42.932 "impl_name": "posix" 00:19:42.932 } 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "method": "sock_impl_set_options", 00:19:42.932 "params": { 00:19:42.932 "impl_name": "ssl", 00:19:42.932 "recv_buf_size": 4096, 00:19:42.932 "send_buf_size": 4096, 00:19:42.932 "enable_recv_pipe": true, 00:19:42.932 "enable_quickack": false, 00:19:42.932 "enable_placement_id": 0, 00:19:42.932 "enable_zerocopy_send_server": true, 00:19:42.932 
"enable_zerocopy_send_client": false, 00:19:42.932 "zerocopy_threshold": 0, 00:19:42.932 "tls_version": 0, 00:19:42.932 "enable_ktls": false 00:19:42.932 } 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "method": "sock_impl_set_options", 00:19:42.932 "params": { 00:19:42.932 "impl_name": "posix", 00:19:42.932 "recv_buf_size": 2097152, 00:19:42.932 "send_buf_size": 2097152, 00:19:42.932 "enable_recv_pipe": true, 00:19:42.932 "enable_quickack": false, 00:19:42.932 "enable_placement_id": 0, 00:19:42.932 "enable_zerocopy_send_server": true, 00:19:42.932 "enable_zerocopy_send_client": false, 00:19:42.932 "zerocopy_threshold": 0, 00:19:42.932 "tls_version": 0, 00:19:42.932 "enable_ktls": false 00:19:42.932 } 00:19:42.932 } 00:19:42.932 ] 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "subsystem": "vmd", 00:19:42.932 "config": [] 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "subsystem": "accel", 00:19:42.932 "config": [ 00:19:42.932 { 00:19:42.932 "method": "accel_set_options", 00:19:42.932 "params": { 00:19:42.932 "small_cache_size": 128, 00:19:42.932 "large_cache_size": 16, 00:19:42.932 "task_count": 2048, 00:19:42.932 "sequence_count": 2048, 00:19:42.932 "buf_count": 2048 00:19:42.932 } 00:19:42.932 } 00:19:42.932 ] 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "subsystem": "bdev", 00:19:42.932 "config": [ 00:19:42.932 { 00:19:42.932 "method": "bdev_set_options", 00:19:42.932 "params": { 00:19:42.932 "bdev_io_pool_size": 65535, 00:19:42.932 "bdev_io_cache_size": 256, 00:19:42.932 "bdev_auto_examine": true, 00:19:42.932 "iobuf_small_cache_size": 128, 00:19:42.932 "iobuf_large_cache_size": 16 00:19:42.932 } 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "method": "bdev_raid_set_options", 00:19:42.932 "params": { 00:19:42.932 "process_window_size_kb": 1024, 00:19:42.932 "process_max_bandwidth_mb_sec": 0 00:19:42.932 } 00:19:42.932 }, 00:19:42.932 { 00:19:42.932 "method": "bdev_iscsi_set_options", 00:19:42.932 "params": { 00:19:42.932 "timeout_sec": 30 00:19:42.932 } 00:19:42.932 }, 
00:19:42.932 { 00:19:42.932 "method": "bdev_nvme_set_options", 00:19:42.932 "params": { 00:19:42.932 "action_on_timeout": "none", 00:19:42.932 "timeout_us": 0, 00:19:42.932 "timeout_admin_us": 0, 00:19:42.932 "keep_alive_timeout_ms": 10000, 00:19:42.933 "arbitration_burst": 0, 00:19:42.933 "low_priority_weight": 0, 00:19:42.933 "medium_priority_weight": 0, 00:19:42.933 "high_priority_weight": 0, 00:19:42.933 "nvme_adminq_poll_period_us": 10000, 00:19:42.933 "nvme_ioq_poll_period_us": 0, 00:19:42.933 "io_queue_requests": 0, 00:19:42.933 "delay_cmd_submit": true, 00:19:42.933 "transport_retry_count": 4, 00:19:42.933 "bdev_retry_count": 3, 00:19:42.933 "transport_ack_timeout": 0, 00:19:42.933 "ctrlr_loss_timeout_sec": 0, 00:19:42.933 "reconnect_delay_sec": 0, 00:19:42.933 "fast_io_fail_timeout_sec": 0, 00:19:42.933 "disable_auto_failback": false, 00:19:42.933 "generate_uuids": false, 00:19:42.933 "transport_tos": 0, 00:19:42.933 "nvme_error_stat": false, 00:19:42.933 "rdma_srq_size": 0, 00:19:42.933 "io_path_stat": false, 00:19:42.933 "allow_accel_sequence": false, 00:19:42.933 "rdma_max_cq_size": 0, 00:19:42.933 "rdma_cm_event_timeout_ms": 0, 00:19:42.933 "dhchap_digests": [ 00:19:42.933 "sha256", 00:19:42.933 "sha384", 00:19:42.933 "sha512" 00:19:42.933 ], 00:19:42.933 "dhchap_dhgroups": [ 00:19:42.933 "null", 00:19:42.933 "ffdhe2048", 00:19:42.933 "ffdhe3072", 00:19:42.933 "ffdhe4096", 00:19:42.933 "ffdhe6144", 00:19:42.933 "ffdhe8192" 00:19:42.933 ] 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "bdev_nvme_set_hotplug", 00:19:42.933 "params": { 00:19:42.933 "period_us": 100000, 00:19:42.933 "enable": false 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "bdev_malloc_create", 00:19:42.933 "params": { 00:19:42.933 "name": "malloc0", 00:19:42.933 "num_blocks": 8192, 00:19:42.933 "block_size": 4096, 00:19:42.933 "physical_block_size": 4096, 00:19:42.933 "uuid": "5731b7fc-86a5-4663-8d78-6d880249029d", 00:19:42.933 
"optimal_io_boundary": 0, 00:19:42.933 "md_size": 0, 00:19:42.933 "dif_type": 0, 00:19:42.933 "dif_is_head_of_md": false, 00:19:42.933 "dif_pi_format": 0 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "bdev_wait_for_examine" 00:19:42.933 } 00:19:42.933 ] 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "subsystem": "nbd", 00:19:42.933 "config": [] 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "subsystem": "scheduler", 00:19:42.933 "config": [ 00:19:42.933 { 00:19:42.933 "method": "framework_set_scheduler", 00:19:42.933 "params": { 00:19:42.933 "name": "static" 00:19:42.933 } 00:19:42.933 } 00:19:42.933 ] 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "subsystem": "nvmf", 00:19:42.933 "config": [ 00:19:42.933 { 00:19:42.933 "method": "nvmf_set_config", 00:19:42.933 "params": { 00:19:42.933 "discovery_filter": "match_any", 00:19:42.933 "admin_cmd_passthru": { 00:19:42.933 "identify_ctrlr": false 00:19:42.933 }, 00:19:42.933 "dhchap_digests": [ 00:19:42.933 "sha256", 00:19:42.933 "sha384", 00:19:42.933 "sha512" 00:19:42.933 ], 00:19:42.933 "dhchap_dhgroups": [ 00:19:42.933 "null", 00:19:42.933 "ffdhe2048", 00:19:42.933 "ffdhe3072", 00:19:42.933 "ffdhe4096", 00:19:42.933 "ffdhe6144", 00:19:42.933 "ffdhe8192" 00:19:42.933 ] 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "nvmf_set_max_subsystems", 00:19:42.933 "params": { 00:19:42.933 "max_subsystems": 1024 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "nvmf_set_crdt", 00:19:42.933 "params": { 00:19:42.933 "crdt1": 0, 00:19:42.933 "crdt2": 0, 00:19:42.933 "crdt3": 0 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "nvmf_create_transport", 00:19:42.933 "params": { 00:19:42.933 "trtype": "TCP", 00:19:42.933 "max_queue_depth": 128, 00:19:42.933 "max_io_qpairs_per_ctrlr": 127, 00:19:42.933 "in_capsule_data_size": 4096, 00:19:42.933 "max_io_size": 131072, 00:19:42.933 "io_unit_size": 131072, 00:19:42.933 "max_aq_depth": 128, 00:19:42.933 
"num_shared_buffers": 511, 00:19:42.933 "buf_cache_size": 4294967295, 00:19:42.933 "dif_insert_or_strip": false, 00:19:42.933 "zcopy": false, 00:19:42.933 "c2h_success": false, 00:19:42.933 "sock_priority": 0, 00:19:42.933 "abort_timeout_sec": 1, 00:19:42.933 "ack_timeout": 0, 00:19:42.933 "data_wr_pool_size": 0 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "nvmf_create_subsystem", 00:19:42.933 "params": { 00:19:42.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.933 "allow_any_host": false, 00:19:42.933 "serial_number": "00000000000000000000", 00:19:42.933 "model_number": "SPDK bdev Controller", 00:19:42.933 "max_namespaces": 32, 00:19:42.933 "min_cntlid": 1, 00:19:42.933 "max_cntlid": 65519, 00:19:42.933 "ana_reporting": false 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "nvmf_subsystem_add_host", 00:19:42.933 "params": { 00:19:42.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.933 "host": "nqn.2016-06.io.spdk:host1", 00:19:42.933 "psk": "key0" 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "nvmf_subsystem_add_ns", 00:19:42.933 "params": { 00:19:42.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.933 "namespace": { 00:19:42.933 "nsid": 1, 00:19:42.933 "bdev_name": "malloc0", 00:19:42.933 "nguid": "5731B7FC86A546638D786D880249029D", 00:19:42.933 "uuid": "5731b7fc-86a5-4663-8d78-6d880249029d", 00:19:42.933 "no_auto_visible": false 00:19:42.933 } 00:19:42.933 } 00:19:42.933 }, 00:19:42.933 { 00:19:42.933 "method": "nvmf_subsystem_add_listener", 00:19:42.933 "params": { 00:19:42.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.933 "listen_address": { 00:19:42.933 "trtype": "TCP", 00:19:42.933 "adrfam": "IPv4", 00:19:42.933 "traddr": "10.0.0.2", 00:19:42.933 "trsvcid": "4420" 00:19:42.933 }, 00:19:42.933 "secure_channel": false, 00:19:42.933 "sock_impl": "ssl" 00:19:42.933 } 00:19:42.933 } 00:19:42.933 ] 00:19:42.933 } 00:19:42.933 ] 00:19:42.933 }' 00:19:42.933 15:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=3989283 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 3989283 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3989283 ']' 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.933 15:16:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.933 [2024-10-01 15:16:52.599242] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:19:42.933 [2024-10-01 15:16:52.599300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.933 [2024-10-01 15:16:52.664158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.933 [2024-10-01 15:16:52.728718] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.933 [2024-10-01 15:16:52.728756] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.933 [2024-10-01 15:16:52.728764] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.933 [2024-10-01 15:16:52.728770] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.933 [2024-10-01 15:16:52.728776] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.933 [2024-10-01 15:16:52.728825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.193 [2024-10-01 15:16:52.938365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.193 [2024-10-01 15:16:52.970376] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.193 [2024-10-01 15:16:52.970583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3989428 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3989428 /var/tmp/bdevperf.sock 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3989428 ']' 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:43.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.764 15:16:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:43.764 "subsystems": [ 00:19:43.764 { 00:19:43.764 "subsystem": "keyring", 00:19:43.764 "config": [ 00:19:43.765 { 00:19:43.765 "method": "keyring_file_add_key", 00:19:43.765 "params": { 00:19:43.765 "name": "key0", 00:19:43.765 "path": "/tmp/tmp.756kRTrJb0" 00:19:43.765 } 00:19:43.765 } 00:19:43.765 ] 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "subsystem": "iobuf", 00:19:43.765 "config": [ 00:19:43.765 { 00:19:43.765 "method": "iobuf_set_options", 00:19:43.765 "params": { 00:19:43.765 "small_pool_count": 8192, 00:19:43.765 "large_pool_count": 1024, 00:19:43.765 "small_bufsize": 8192, 00:19:43.765 "large_bufsize": 135168 00:19:43.765 } 00:19:43.765 } 00:19:43.765 ] 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "subsystem": "sock", 00:19:43.765 "config": [ 00:19:43.765 { 00:19:43.765 "method": "sock_set_default_impl", 00:19:43.765 "params": { 00:19:43.765 "impl_name": "posix" 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": "sock_impl_set_options", 00:19:43.765 "params": { 00:19:43.765 "impl_name": "ssl", 00:19:43.765 "recv_buf_size": 4096, 00:19:43.765 "send_buf_size": 4096, 00:19:43.765 "enable_recv_pipe": true, 00:19:43.765 "enable_quickack": false, 00:19:43.765 "enable_placement_id": 0, 00:19:43.765 "enable_zerocopy_send_server": true, 00:19:43.765 "enable_zerocopy_send_client": false, 00:19:43.765 
"zerocopy_threshold": 0, 00:19:43.765 "tls_version": 0, 00:19:43.765 "enable_ktls": false 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": "sock_impl_set_options", 00:19:43.765 "params": { 00:19:43.765 "impl_name": "posix", 00:19:43.765 "recv_buf_size": 2097152, 00:19:43.765 "send_buf_size": 2097152, 00:19:43.765 "enable_recv_pipe": true, 00:19:43.765 "enable_quickack": false, 00:19:43.765 "enable_placement_id": 0, 00:19:43.765 "enable_zerocopy_send_server": true, 00:19:43.765 "enable_zerocopy_send_client": false, 00:19:43.765 "zerocopy_threshold": 0, 00:19:43.765 "tls_version": 0, 00:19:43.765 "enable_ktls": false 00:19:43.765 } 00:19:43.765 } 00:19:43.765 ] 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "subsystem": "vmd", 00:19:43.765 "config": [] 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "subsystem": "accel", 00:19:43.765 "config": [ 00:19:43.765 { 00:19:43.765 "method": "accel_set_options", 00:19:43.765 "params": { 00:19:43.765 "small_cache_size": 128, 00:19:43.765 "large_cache_size": 16, 00:19:43.765 "task_count": 2048, 00:19:43.765 "sequence_count": 2048, 00:19:43.765 "buf_count": 2048 00:19:43.765 } 00:19:43.765 } 00:19:43.765 ] 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "subsystem": "bdev", 00:19:43.765 "config": [ 00:19:43.765 { 00:19:43.765 "method": "bdev_set_options", 00:19:43.765 "params": { 00:19:43.765 "bdev_io_pool_size": 65535, 00:19:43.765 "bdev_io_cache_size": 256, 00:19:43.765 "bdev_auto_examine": true, 00:19:43.765 "iobuf_small_cache_size": 128, 00:19:43.765 "iobuf_large_cache_size": 16 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": "bdev_raid_set_options", 00:19:43.765 "params": { 00:19:43.765 "process_window_size_kb": 1024, 00:19:43.765 "process_max_bandwidth_mb_sec": 0 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": "bdev_iscsi_set_options", 00:19:43.765 "params": { 00:19:43.765 "timeout_sec": 30 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": 
"bdev_nvme_set_options", 00:19:43.765 "params": { 00:19:43.765 "action_on_timeout": "none", 00:19:43.765 "timeout_us": 0, 00:19:43.765 "timeout_admin_us": 0, 00:19:43.765 "keep_alive_timeout_ms": 10000, 00:19:43.765 "arbitration_burst": 0, 00:19:43.765 "low_priority_weight": 0, 00:19:43.765 "medium_priority_weight": 0, 00:19:43.765 "high_priority_weight": 0, 00:19:43.765 "nvme_adminq_poll_period_us": 10000, 00:19:43.765 "nvme_ioq_poll_period_us": 0, 00:19:43.765 "io_queue_requests": 512, 00:19:43.765 "delay_cmd_submit": true, 00:19:43.765 "transport_retry_count": 4, 00:19:43.765 "bdev_retry_count": 3, 00:19:43.765 "transport_ack_timeout": 0, 00:19:43.765 "ctrlr_loss_timeout_sec": 0, 00:19:43.765 "reconnect_delay_sec": 0, 00:19:43.765 "fast_io_fail_timeout_sec": 0, 00:19:43.765 "disable_auto_failback": false, 00:19:43.765 "generate_uuids": false, 00:19:43.765 "transport_tos": 0, 00:19:43.765 "nvme_error_stat": false, 00:19:43.765 "rdma_srq_size": 0, 00:19:43.765 "io_path_stat": false, 00:19:43.765 "allow_accel_sequence": false, 00:19:43.765 "rdma_max_cq_size": 0, 00:19:43.765 "rdma_cm_event_timeout_ms": 0, 00:19:43.765 "dhchap_digests": [ 00:19:43.765 "sha256", 00:19:43.765 "sha384", 00:19:43.765 "sha512" 00:19:43.765 ], 00:19:43.765 "dhchap_dhgroups": [ 00:19:43.765 "null", 00:19:43.765 "ffdhe2048", 00:19:43.765 "ffdhe3072", 00:19:43.765 "ffdhe4096", 00:19:43.765 "ffdhe6144", 00:19:43.765 "ffdhe8192" 00:19:43.765 ] 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": "bdev_nvme_attach_controller", 00:19:43.765 "params": { 00:19:43.765 "name": "nvme0", 00:19:43.765 "trtype": "TCP", 00:19:43.765 "adrfam": "IPv4", 00:19:43.765 "traddr": "10.0.0.2", 00:19:43.765 "trsvcid": "4420", 00:19:43.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.765 "prchk_reftag": false, 00:19:43.765 "prchk_guard": false, 00:19:43.765 "ctrlr_loss_timeout_sec": 0, 00:19:43.765 "reconnect_delay_sec": 0, 00:19:43.765 "fast_io_fail_timeout_sec": 0, 00:19:43.765 "psk": "key0", 
00:19:43.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.765 "hdgst": false, 00:19:43.765 "ddgst": false 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": "bdev_nvme_set_hotplug", 00:19:43.765 "params": { 00:19:43.765 "period_us": 100000, 00:19:43.765 "enable": false 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": "bdev_enable_histogram", 00:19:43.765 "params": { 00:19:43.765 "name": "nvme0n1", 00:19:43.765 "enable": true 00:19:43.765 } 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "method": "bdev_wait_for_examine" 00:19:43.765 } 00:19:43.765 ] 00:19:43.765 }, 00:19:43.765 { 00:19:43.765 "subsystem": "nbd", 00:19:43.765 "config": [] 00:19:43.765 } 00:19:43.765 ] 00:19:43.765 }' 00:19:43.765 [2024-10-01 15:16:53.499317] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:19:43.765 [2024-10-01 15:16:53.499372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3989428 ] 00:19:43.765 [2024-10-01 15:16:53.552697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.765 [2024-10-01 15:16:53.606694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.026 [2024-10-01 15:16:53.742726] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.596 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.596 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:44.596 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:44.596 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # 
jq -r '.[].name' 00:19:44.596 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.596 15:16:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.856 Running I/O for 1 seconds... 00:19:45.798 3928.00 IOPS, 15.34 MiB/s 00:19:45.798 Latency(us) 00:19:45.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.798 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:45.798 Verification LBA range: start 0x0 length 0x2000 00:19:45.798 nvme0n1 : 1.02 3995.39 15.61 0.00 0.00 31793.34 5324.80 34952.53 00:19:45.798 =================================================================================================================== 00:19:45.798 Total : 3995.39 15.61 0.00 0.00 31793.34 5324.80 34952.53 00:19:45.798 { 00:19:45.798 "results": [ 00:19:45.798 { 00:19:45.798 "job": "nvme0n1", 00:19:45.798 "core_mask": "0x2", 00:19:45.798 "workload": "verify", 00:19:45.798 "status": "finished", 00:19:45.798 "verify_range": { 00:19:45.798 "start": 0, 00:19:45.798 "length": 8192 00:19:45.798 }, 00:19:45.798 "queue_depth": 128, 00:19:45.798 "io_size": 4096, 00:19:45.798 "runtime": 1.01517, 00:19:45.798 "iops": 3995.3899346907415, 00:19:45.798 "mibps": 15.606991932385709, 00:19:45.798 "io_failed": 0, 00:19:45.798 "io_timeout": 0, 00:19:45.798 "avg_latency_us": 31793.341854043392, 00:19:45.798 "min_latency_us": 5324.8, 00:19:45.798 "max_latency_us": 34952.53333333333 00:19:45.798 } 00:19:45.798 ], 00:19:45.798 "core_count": 1 00:19:45.798 } 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 
00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:45.798 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:45.798 nvmf_trace.0 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3989428 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3989428 ']' 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3989428 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3989428 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:46.059 15:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3989428' 00:19:46.059 killing process with pid 3989428 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3989428 00:19:46.059 Received shutdown signal, test time was about 1.000000 seconds 00:19:46.059 00:19:46.059 Latency(us) 00:19:46.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.059 =================================================================================================================== 00:19:46.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3989428 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:46.059 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:46.059 rmmod nvme_tcp 00:19:46.059 rmmod nvme_fabrics 00:19:46.059 rmmod nvme_keyring 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:46.320 15:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 3989283 ']' 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 3989283 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3989283 ']' 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3989283 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.320 15:16:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3989283 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3989283' 00:19:46.320 killing process with pid 3989283 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3989283 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3989283 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.320 15:16:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.LIobnjpGSd /tmp/tmp.l7P9atyKUb /tmp/tmp.756kRTrJb0 00:19:48.865 00:19:48.865 real 1m26.015s 00:19:48.865 user 2m14.215s 00:19:48.865 sys 0m27.109s 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.865 ************************************ 00:19:48.865 END TEST nvmf_tls 00:19:48.865 ************************************ 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:48.865 ************************************ 00:19:48.865 START TEST nvmf_fips 
00:19:48.865 ************************************ 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:48.865 * Looking for test storage... 00:19:48.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.865 15:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:48.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.865 --rc genhtml_branch_coverage=1 00:19:48.865 --rc genhtml_function_coverage=1 00:19:48.865 --rc genhtml_legend=1 00:19:48.865 --rc geninfo_all_blocks=1 00:19:48.865 --rc geninfo_unexecuted_blocks=1 00:19:48.865 00:19:48.865 ' 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:48.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.865 --rc genhtml_branch_coverage=1 00:19:48.865 --rc genhtml_function_coverage=1 00:19:48.865 --rc genhtml_legend=1 00:19:48.865 --rc geninfo_all_blocks=1 00:19:48.865 --rc geninfo_unexecuted_blocks=1 00:19:48.865 00:19:48.865 ' 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:48.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.865 --rc genhtml_branch_coverage=1 00:19:48.865 --rc genhtml_function_coverage=1 00:19:48.865 --rc genhtml_legend=1 00:19:48.865 --rc geninfo_all_blocks=1 00:19:48.865 --rc geninfo_unexecuted_blocks=1 00:19:48.865 00:19:48.865 ' 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:48.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.865 --rc genhtml_branch_coverage=1 00:19:48.865 --rc genhtml_function_coverage=1 00:19:48.865 --rc genhtml_legend=1 00:19:48.865 --rc geninfo_all_blocks=1 00:19:48.865 --rc geninfo_unexecuted_blocks=1 00:19:48.865 00:19:48.865 ' 00:19:48.865 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.866 
15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s 
extglob 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:48.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.866 15:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:48.866 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:48.867 Error setting digest 00:19:48.867 4052CC27547F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:48.867 4052CC27547F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:48.867 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:49.127 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:49.127 15:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.127 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.127 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.127 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:19:49.127 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:19:49.127 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:49.127 15:16:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:19:57.269 15:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:57.269 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:57.269 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.269 15:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:57.269 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.269 
15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ up == up ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:57.269 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # is_hw=yes 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:57.269 15:17:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:57.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:19:57.269 00:19:57.269 --- 10.0.0.2 ping statistics --- 00:19:57.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.269 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:19:57.269 00:19:57.269 --- 10.0.0.1 ping statistics --- 00:19:57.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.269 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # return 0 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=3994132 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 3994132 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3994132 ']' 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:57.269 [2024-10-01 15:17:06.204352] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:19:57.269 [2024-10-01 15:17:06.204426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.269 [2024-10-01 15:17:06.292814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.269 [2024-10-01 15:17:06.387140] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.269 [2024-10-01 15:17:06.387208] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.269 [2024-10-01 15:17:06.387217] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.269 [2024-10-01 15:17:06.387224] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.269 [2024-10-01 15:17:06.387230] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:57.269 [2024-10-01 15:17:06.387257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.269 15:17:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:57.269 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.269 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:57.269 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:57.270 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:57.270 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.HSW 00:19:57.270 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:57.270 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.HSW 00:19:57.270 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.HSW 00:19:57.270 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.HSW 00:19:57.270 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.530 [2024-10-01 15:17:07.219521] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.530 [2024-10-01 15:17:07.235530] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.530 [2024-10-01 15:17:07.235817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.530 malloc0 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3994483 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3994483 /var/tmp/bdevperf.sock 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3994483 ']' 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.530 15:17:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:57.790 [2024-10-01 15:17:07.399520] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:19:57.790 [2024-10-01 15:17:07.399593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3994483 ] 00:19:57.790 [2024-10-01 15:17:07.455294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.790 [2024-10-01 15:17:07.519037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.360 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.360 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:58.360 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.HSW 00:19:58.620 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.620 [2024-10-01 15:17:08.456770] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.880 TLSTESTn1 00:19:58.880 15:17:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.880 Running I/O for 10 seconds... 
00:20:09.166 5466.00 IOPS, 21.35 MiB/s 5589.50 IOPS, 21.83 MiB/s 5698.67 IOPS, 22.26 MiB/s 5703.25 IOPS, 22.28 MiB/s 5654.00 IOPS, 22.09 MiB/s 5665.67 IOPS, 22.13 MiB/s 5695.86 IOPS, 22.25 MiB/s 5650.62 IOPS, 22.07 MiB/s 5612.89 IOPS, 21.93 MiB/s 5581.00 IOPS, 21.80 MiB/s 00:20:09.166 Latency(us) 00:20:09.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.166 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:09.166 Verification LBA range: start 0x0 length 0x2000 00:20:09.166 TLSTESTn1 : 10.02 5582.06 21.80 0.00 0.00 22891.93 5543.25 43909.12 00:20:09.166 =================================================================================================================== 00:20:09.166 Total : 5582.06 21.80 0.00 0.00 22891.93 5543.25 43909.12 00:20:09.166 { 00:20:09.166 "results": [ 00:20:09.166 { 00:20:09.166 "job": "TLSTESTn1", 00:20:09.166 "core_mask": "0x4", 00:20:09.166 "workload": "verify", 00:20:09.166 "status": "finished", 00:20:09.166 "verify_range": { 00:20:09.166 "start": 0, 00:20:09.166 "length": 8192 00:20:09.166 }, 00:20:09.166 "queue_depth": 128, 00:20:09.166 "io_size": 4096, 00:20:09.166 "runtime": 10.021025, 00:20:09.166 "iops": 5582.063711047523, 00:20:09.166 "mibps": 21.804936371279386, 00:20:09.166 "io_failed": 0, 00:20:09.166 "io_timeout": 0, 00:20:09.166 "avg_latency_us": 22891.92633177208, 00:20:09.166 "min_latency_us": 5543.253333333333, 00:20:09.166 "max_latency_us": 43909.12 00:20:09.166 } 00:20:09.166 ], 00:20:09.166 "core_count": 1 00:20:09.166 } 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:09.166 15:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:09.166 nvmf_trace.0 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3994483 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3994483 ']' 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3994483 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3994483 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3994483' 00:20:09.166 killing process with pid 3994483 00:20:09.166 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3994483 00:20:09.166 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.166 00:20:09.166 Latency(us) 00:20:09.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.167 =================================================================================================================== 00:20:09.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.167 15:17:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3994483 00:20:09.167 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:09.167 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:09.167 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:09.167 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:09.167 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:09.167 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.167 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:09.167 rmmod nvme_tcp 00:20:09.427 rmmod nvme_fabrics 00:20:09.427 rmmod nvme_keyring 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 3994132 ']' 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 3994132 00:20:09.427 15:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3994132 ']' 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3994132 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3994132 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3994132' 00:20:09.427 killing process with pid 3994132 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3994132 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3994132 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.427 15:17:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.HSW 00:20:11.997 00:20:11.997 real 0m23.040s 00:20:11.997 user 0m24.027s 00:20:11.997 sys 0m10.199s 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:11.997 ************************************ 00:20:11.997 END TEST nvmf_fips 00:20:11.997 ************************************ 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:11.997 ************************************ 00:20:11.997 START TEST nvmf_control_msg_list 00:20:11.997 ************************************ 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:11.997 * Looking for test storage... 00:20:11.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:20:11.997 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:11.998 
15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:11.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.998 --rc genhtml_branch_coverage=1 00:20:11.998 --rc genhtml_function_coverage=1 00:20:11.998 --rc genhtml_legend=1 00:20:11.998 --rc geninfo_all_blocks=1 00:20:11.998 --rc geninfo_unexecuted_blocks=1 00:20:11.998 00:20:11.998 ' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:11.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.998 --rc genhtml_branch_coverage=1 00:20:11.998 --rc genhtml_function_coverage=1 00:20:11.998 --rc genhtml_legend=1 00:20:11.998 --rc geninfo_all_blocks=1 00:20:11.998 --rc geninfo_unexecuted_blocks=1 00:20:11.998 00:20:11.998 ' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:11.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.998 --rc genhtml_branch_coverage=1 00:20:11.998 --rc genhtml_function_coverage=1 00:20:11.998 --rc genhtml_legend=1 00:20:11.998 --rc geninfo_all_blocks=1 00:20:11.998 --rc geninfo_unexecuted_blocks=1 00:20:11.998 00:20:11.998 ' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:11.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.998 --rc genhtml_branch_coverage=1 00:20:11.998 --rc genhtml_function_coverage=1 00:20:11.998 --rc genhtml_legend=1 00:20:11.998 --rc geninfo_all_blocks=1 00:20:11.998 --rc geninfo_unexecuted_blocks=1 00:20:11.998 00:20:11.998 ' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.998 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:11.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 
-- # nvmftestinit 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:11.999 15:17:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:20.139 15:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:20.139 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:20.140 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:20.140 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.140 15:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:20.140 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:20.140 Found net devices 
under 0000:4b:00.1: cvl_0_1 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # is_hw=yes 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:20.140 15:17:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:20.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:20.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:20:20.140 00:20:20.140 --- 10.0.0.2 ping statistics --- 00:20:20.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.140 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:20.140 00:20:20.140 --- 10.0.0.1 ping statistics --- 00:20:20.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.140 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # return 0 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:20.140 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:20.140 15:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=4000835 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 4000835 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 4000835 ']' 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.141 [2024-10-01 15:17:29.193110] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:20:20.141 [2024-10-01 15:17:29.193183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.141 [2024-10-01 15:17:29.264129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.141 [2024-10-01 15:17:29.339777] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.141 [2024-10-01 15:17:29.339815] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.141 [2024-10-01 15:17:29.339823] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.141 [2024-10-01 15:17:29.339830] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.141 [2024-10-01 15:17:29.339836] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
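The target-side network plumbing traced earlier in this run (address flush, namespace creation, moving the target interface, IP assignment, the iptables accept rule, and the two-way ping check) can be sketched as a standalone script. This is a minimal sketch, assuming the same device names (cvl_0_0 / cvl_0_1), namespace name, and 10.0.0.x addresses as this run; it must be run as root and is not a drop-in replacement for nvmf/common.sh's nvmf_tcp_init:

```shell
set -e

TARGET_IF=cvl_0_0          # moved into the namespace; receives the target IP
INITIATOR_IF=cvl_0_1       # stays in the default namespace
NS=cvl_0_0_ns_spdk

# Clear any stale IPv4 addresses on both interfaces
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Create the namespace and move the target-side interface into it
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Assign the initiator and target addresses
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

# Bring up both interfaces plus loopback inside the namespace
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic arriving on the initiator-side interface (port 4420)
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, as the trace does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this in place, the nvmf_tgt process is launched under `ip netns exec "$NS"` so that it listens on 10.0.0.2 inside the namespace while the perf initiators connect from the default namespace.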
00:20:20.141 [2024-10-01 15:17:29.339854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.141 15:17:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.402 [2024-10-01 15:17:30.028541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.402 Malloc0 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:20.402 [2024-10-01 15:17:30.091509] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=4001183 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=4001184 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=4001185 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 4001183 00:20:20.402 15:17:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.402 [2024-10-01 15:17:30.151929] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
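The trace above launches three spdk_nvme_perf instances in the background (coremasks 0x2, 0x4, 0x8), records each PID, and reaps them with `wait` so all three run concurrently against the single listener. The same background-and-wait pattern can be sketched portably, with a `sleep`+`echo` function standing in for the perf binary (the `run_perf` helper is a hypothetical stand-in, not part of the SPDK scripts):

```shell
run_perf() {
    # Placeholder for: spdk_nvme_perf -c "$1" -q 1 -o 4096 -w randread -t 1 -r '...'
    sleep 0.2
    echo "perf on coremask $1 done"
}

# Launch all three workers before waiting on any of them,
# so they overlap in time just as in the trace.
run_perf 0x2 & perf_pid1=$!
run_perf 0x4 & perf_pid2=$!
run_perf 0x8 & perf_pid3=$!

wait "$perf_pid1"
wait "$perf_pid2"
wait "$perf_pid3"
echo "all perf runs complete"
```

Waiting on each recorded PID (rather than a bare `wait`) also surfaces each worker's exit status individually, which is why the trace pairs every perf_pidN with its own `wait` call.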
00:20:20.402 [2024-10-01 15:17:30.172019] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:20.402 [2024-10-01 15:17:30.172298] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:21.786 Initializing NVMe Controllers 00:20:21.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:21.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:21.786 Initialization complete. Launching workers. 00:20:21.786 ======================================================== 00:20:21.786 Latency(us) 00:20:21.786 Device Information : IOPS MiB/s Average min max 00:20:21.786 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1557.00 6.08 641.98 285.42 804.05 00:20:21.786 ======================================================== 00:20:21.786 Total : 1557.00 6.08 641.98 285.42 804.05 00:20:21.786 00:20:21.786 Initializing NVMe Controllers 00:20:21.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:21.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:21.786 Initialization complete. Launching workers. 
00:20:21.786 ======================================================== 00:20:21.786 Latency(us) 00:20:21.786 Device Information : IOPS MiB/s Average min max 00:20:21.786 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40896.35 40743.38 40970.81 00:20:21.786 ======================================================== 00:20:21.786 Total : 25.00 0.10 40896.35 40743.38 40970.81 00:20:21.786 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 4001184 00:20:21.786 Initializing NVMe Controllers 00:20:21.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:21.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:21.786 Initialization complete. Launching workers. 00:20:21.786 ======================================================== 00:20:21.786 Latency(us) 00:20:21.786 Device Information : IOPS MiB/s Average min max 00:20:21.786 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40898.50 40728.93 41070.87 00:20:21.786 ======================================================== 00:20:21.786 Total : 25.00 0.10 40898.50 40728.93 41070.87 00:20:21.786 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 4001185 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:21.786 15:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:21.786 rmmod nvme_tcp 00:20:21.786 rmmod nvme_fabrics 00:20:21.786 rmmod nvme_keyring 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 4000835 ']' 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 4000835 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 4000835 ']' 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 4000835 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4000835 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:21.786 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 4000835' 00:20:21.787 killing process with pid 4000835 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 4000835 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 4000835 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.787 15:17:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.331 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:24.332 00:20:24.332 real 0m12.228s 00:20:24.332 user 0m7.689s 
00:20:24.332 sys 0m6.432s 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:24.332 ************************************ 00:20:24.332 END TEST nvmf_control_msg_list 00:20:24.332 ************************************ 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:24.332 ************************************ 00:20:24.332 START TEST nvmf_wait_for_buf 00:20:24.332 ************************************ 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:24.332 * Looking for test storage... 
00:20:24.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:20:24.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.332 --rc genhtml_branch_coverage=1 00:20:24.332 --rc genhtml_function_coverage=1 00:20:24.332 --rc genhtml_legend=1 00:20:24.332 --rc geninfo_all_blocks=1 00:20:24.332 --rc geninfo_unexecuted_blocks=1 00:20:24.332 00:20:24.332 ' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:24.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.332 --rc genhtml_branch_coverage=1 00:20:24.332 --rc genhtml_function_coverage=1 00:20:24.332 --rc genhtml_legend=1 00:20:24.332 --rc geninfo_all_blocks=1 00:20:24.332 --rc geninfo_unexecuted_blocks=1 00:20:24.332 00:20:24.332 ' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:24.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.332 --rc genhtml_branch_coverage=1 00:20:24.332 --rc genhtml_function_coverage=1 00:20:24.332 --rc genhtml_legend=1 00:20:24.332 --rc geninfo_all_blocks=1 00:20:24.332 --rc geninfo_unexecuted_blocks=1 00:20:24.332 00:20:24.332 ' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:24.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.332 --rc genhtml_branch_coverage=1 00:20:24.332 --rc genhtml_function_coverage=1 00:20:24.332 --rc genhtml_legend=1 00:20:24.332 --rc geninfo_all_blocks=1 00:20:24.332 --rc geninfo_unexecuted_blocks=1 00:20:24.332 00:20:24.332 ' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.332 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:24.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # 
gather_supported_nvmf_pci_devs 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:24.333 15:17:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 
00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:32.469 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:32.469 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@390 -- # (( 
0 > 0 )) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:32.469 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:32.469 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:32.470 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # is_hw=yes 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:32.470 15:17:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:32.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:20:32.470 00:20:32.470 --- 10.0.0.2 ping statistics --- 00:20:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.470 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:20:32.470 00:20:32.470 --- 10.0.0.1 ping statistics --- 00:20:32.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.470 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # return 0 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == 
tcp ']' 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=4005524 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 4005524 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 4005524 ']' 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:32.470 15:17:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.470 [2024-10-01 15:17:41.399185] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:20:32.470 [2024-10-01 15:17:41.399254] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.470 [2024-10-01 15:17:41.466961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.470 [2024-10-01 15:17:41.530119] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.470 [2024-10-01 15:17:41.530156] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.470 [2024-10-01 15:17:41.530164] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.470 [2024-10-01 15:17:41.530170] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.470 [2024-10-01 15:17:41.530176] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:32.470 [2024-10-01 15:17:41.530195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.470 Malloc0 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.470 [2024-10-01 15:17:42.288462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.470 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:32.471 [2024-10-01 15:17:42.312623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.471 15:17:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.731 [2024-10-01 15:17:42.397092] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:34.112 Initializing NVMe Controllers 00:20:34.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:34.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:34.112 Initialization complete. Launching workers. 00:20:34.112 ======================================================== 00:20:34.112 Latency(us) 00:20:34.112 Device Information : IOPS MiB/s Average min max 00:20:34.112 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32264.50 7992.54 63852.14 00:20:34.112 ======================================================== 00:20:34.112 Total : 129.00 16.12 32264.50 7992.54 63852.14 00:20:34.112 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.112 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.112 rmmod nvme_tcp 00:20:34.112 rmmod nvme_fabrics 00:20:34.373 rmmod nvme_keyring 00:20:34.373 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.373 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:34.373 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:34.373 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 4005524 ']' 00:20:34.373 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 4005524 00:20:34.373 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 4005524 ']' 00:20:34.373 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 4005524 00:20:34.373 15:17:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4005524 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4005524' 00:20:34.373 killing process with pid 4005524 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 4005524 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 4005524 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.373 15:17:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.914 00:20:36.914 real 0m12.544s 00:20:36.914 user 0m4.975s 00:20:36.914 sys 0m6.093s 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:36.914 ************************************ 00:20:36.914 END TEST nvmf_wait_for_buf 00:20:36.914 ************************************ 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.914 15:17:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 
-- # net_devs=() 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.493 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:43.494 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:43.494 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.494 15:17:53 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:43.494 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # (( 1 == 0 )) 
00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:43.494 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.494 ************************************ 00:20:43.494 START TEST nvmf_perf_adq 00:20:43.494 ************************************ 00:20:43.494 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:43.755 * Looking for test storage... 
00:20:43.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:43.755 15:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.755 --rc 
genhtml_branch_coverage=1 00:20:43.755 --rc genhtml_function_coverage=1 00:20:43.755 --rc genhtml_legend=1 00:20:43.755 --rc geninfo_all_blocks=1 00:20:43.755 --rc geninfo_unexecuted_blocks=1 00:20:43.755 00:20:43.755 ' 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.755 --rc genhtml_branch_coverage=1 00:20:43.755 --rc genhtml_function_coverage=1 00:20:43.755 --rc genhtml_legend=1 00:20:43.755 --rc geninfo_all_blocks=1 00:20:43.755 --rc geninfo_unexecuted_blocks=1 00:20:43.755 00:20:43.755 ' 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.755 --rc genhtml_branch_coverage=1 00:20:43.755 --rc genhtml_function_coverage=1 00:20:43.755 --rc genhtml_legend=1 00:20:43.755 --rc geninfo_all_blocks=1 00:20:43.755 --rc geninfo_unexecuted_blocks=1 00:20:43.755 00:20:43.755 ' 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:43.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.755 --rc genhtml_branch_coverage=1 00:20:43.755 --rc genhtml_function_coverage=1 00:20:43.755 --rc genhtml_legend=1 00:20:43.755 --rc geninfo_all_blocks=1 00:20:43.755 --rc geninfo_unexecuted_blocks=1 00:20:43.755 00:20:43.755 ' 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.755 15:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.755 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:43.756 15:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.756 15:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.756 15:17:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.006 15:18:00 
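Note on the error recorded above: `nvmf/common.sh: line 33: [: : integer expression expected` is the classic bash failure mode of `'[' "$var" -eq 1 ']'` when `$var` expands to an empty string, so `[` sees no operand for `-eq`. A minimal hedged sketch of the safe pattern (this is an illustration of the general fix, not SPDK's actual code; `check_flag` is a hypothetical helper):

```shell
#!/usr/bin/env bash
# Sketch: guard a numeric test against an empty variable, the condition
# that produces "[: : integer expression expected" in the log above.
check_flag() {
    local flag="${1:-}"              # may be empty, as in the failing trace line
    if [ "${flag:-0}" -eq 1 ]; then  # default empty to 0 so '[' always gets a number
        echo "enabled"
    else
        echo "disabled"
    fi
}

check_flag ""   # empty input no longer triggers the error
check_flag 1
```

With the `${flag:-0}` expansion, `[` always receives a well-formed integer operand, so the trace would not emit the diagnostic seen here (the test's outcome for an unset flag is unchanged: it falls through to the else branch).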
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.006 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:52.007 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:52.007 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == 
unknown ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:52.007 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.007 15:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:52.007 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:52.007 15:18:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:52.578 15:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:55.124 15:18:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
local -a pci_devs 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:00.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:00.414 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:00.414 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.415 15:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:00.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:00.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 
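The discovery loop traced above (common.sh@406–425) resolves each PCI function to its kernel net device by globbing the device's `net/` subdirectory in sysfs and then stripping the path prefix with `${pci_net_devs[@]##*/}`. A minimal standalone sketch of that pattern, using a throwaway directory in place of the real `/sys/bus/pci/devices` (the PCI address and the `cvl_0_0` name are taken from the log above, but the fake tree is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the PCI -> net-device lookup pattern from nvmf/common.sh.
# Uses a scratch directory instead of the real /sys/bus/pci/devices.
set -euo pipefail

sysfs=$(mktemp -d)
pci="0000:4b:00.0"
# Fake what the ice driver would create for a bound, named interface.
mkdir -p "$sysfs/$pci/net/cvl_0_0"

# Same shape as common.sh@407: glob the net/ subdir of the PCI device...
pci_net_devs=("$sysfs/$pci/net/"*)
# ...then keep only the basename, as in common.sh@423.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

The `##*/` expansion deletes the longest leading match of `*/` from each array element, which is why the full sysfs path collapses to just the interface name.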
00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.415 15:18:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:21:00.415 00:21:00.415 --- 10.0.0.2 ping statistics --- 00:21:00.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.415 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:21:00.415 00:21:00.415 --- 10.0.0.1 ping statistics --- 00:21:00.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.415 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=4016347 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 4016347 00:21:00.415 
15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 4016347 ']' 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.415 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.415 [2024-10-01 15:18:10.191018] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:21:00.415 [2024-10-01 15:18:10.191088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.415 [2024-10-01 15:18:10.264470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.676 [2024-10-01 15:18:10.341710] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.676 [2024-10-01 15:18:10.341750] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
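The `waitforlisten 4016347` step above blocks until the target's RPC socket appears at `/var/tmp/spdk.sock`. A simplified stand-in for that helper (the real one lives in autotest_common.sh and also checks the pid; the retry budget and scratch socket path here are illustrative):

```shell
#!/usr/bin/env bash
# Simplified waitforlisten: poll until a UNIX-domain socket path exists.
set -euo pipefail

waitforlisten() {
    local rpc_addr=${1:-/var/tmp/spdk.sock}
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # -S: path exists and is a socket (what an RPC listener leaves behind).
        if [ -S "$rpc_addr" ]; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}

# Demo against a scratch path: bind a socket shortly after we start waiting.
sock=$(mktemp -u)
( sleep 0.2
  python3 -c 'import socket,sys; socket.socket(socket.AF_UNIX).bind(sys.argv[1])' "$sock" ) &
waitforlisten "$sock" 50
echo "listener is up at $sock"
wait
rm -f "$sock"
```

Polling for the socket file rather than sleeping a fixed interval is what lets the suite proceed as soon as `nvmf_tgt` is actually ready inside the `cvl_0_0_ns_spdk` namespace.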
00:21:00.676 [2024-10-01 15:18:10.341758] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.676 [2024-10-01 15:18:10.341765] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.676 [2024-10-01 15:18:10.341771] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.676 [2024-10-01 15:18:10.341941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.676 [2024-10-01 15:18:10.342050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.676 [2024-10-01 15:18:10.342571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.676 [2024-10-01 15:18:10.342570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.247 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.247 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:01.247 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:01.247 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:01.247 15:18:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.247 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.507 [2024-10-01 15:18:11.169398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.507 
15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.507 Malloc1 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.507 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.507 [2024-10-01 15:18:11.228684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:01.508 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.508 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=4016690 00:21:01.508 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:01.508 15:18:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:03.422 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:03.422 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.422 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.422 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.422 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:03.422 "tick_rate": 2400000000, 00:21:03.422 "poll_groups": [ 00:21:03.422 { 00:21:03.422 "name": "nvmf_tgt_poll_group_000", 00:21:03.422 "admin_qpairs": 1, 00:21:03.422 "io_qpairs": 1, 00:21:03.422 "current_admin_qpairs": 1, 00:21:03.422 "current_io_qpairs": 1, 00:21:03.422 "pending_bdev_io": 0, 00:21:03.422 "completed_nvme_io": 19744, 00:21:03.422 "transports": [ 00:21:03.422 { 00:21:03.422 "trtype": "TCP" 00:21:03.422 } 00:21:03.422 ] 00:21:03.422 }, 00:21:03.422 { 00:21:03.422 "name": "nvmf_tgt_poll_group_001", 00:21:03.422 "admin_qpairs": 0, 00:21:03.422 "io_qpairs": 1, 00:21:03.422 "current_admin_qpairs": 0, 00:21:03.422 "current_io_qpairs": 1, 00:21:03.422 "pending_bdev_io": 0, 00:21:03.422 "completed_nvme_io": 28195, 00:21:03.422 "transports": [ 
00:21:03.422 { 00:21:03.422 "trtype": "TCP" 00:21:03.422 } 00:21:03.422 ] 00:21:03.422 }, 00:21:03.422 { 00:21:03.422 "name": "nvmf_tgt_poll_group_002", 00:21:03.422 "admin_qpairs": 0, 00:21:03.422 "io_qpairs": 1, 00:21:03.422 "current_admin_qpairs": 0, 00:21:03.422 "current_io_qpairs": 1, 00:21:03.422 "pending_bdev_io": 0, 00:21:03.422 "completed_nvme_io": 19938, 00:21:03.422 "transports": [ 00:21:03.422 { 00:21:03.422 "trtype": "TCP" 00:21:03.422 } 00:21:03.422 ] 00:21:03.422 }, 00:21:03.422 { 00:21:03.422 "name": "nvmf_tgt_poll_group_003", 00:21:03.422 "admin_qpairs": 0, 00:21:03.422 "io_qpairs": 1, 00:21:03.422 "current_admin_qpairs": 0, 00:21:03.422 "current_io_qpairs": 1, 00:21:03.422 "pending_bdev_io": 0, 00:21:03.422 "completed_nvme_io": 20416, 00:21:03.422 "transports": [ 00:21:03.422 { 00:21:03.422 "trtype": "TCP" 00:21:03.422 } 00:21:03.422 ] 00:21:03.422 } 00:21:03.422 ] 00:21:03.422 }' 00:21:03.422 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:03.422 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:03.683 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:03.683 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:03.683 15:18:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 4016690 00:21:11.822 Initializing NVMe Controllers 00:21:11.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:11.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:11.822 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:11.822 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:11.822 Initialization complete. Launching workers. 00:21:11.822 ======================================================== 00:21:11.822 Latency(us) 00:21:11.822 Device Information : IOPS MiB/s Average min max 00:21:11.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11235.70 43.89 5695.94 1302.40 9337.39 00:21:11.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15008.83 58.63 4263.89 1213.25 8269.18 00:21:11.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13545.45 52.91 4737.31 1334.57 46238.64 00:21:11.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13447.16 52.53 4759.10 1201.32 11188.28 00:21:11.822 ======================================================== 00:21:11.822 Total : 53237.13 207.96 4811.66 1201.32 46238.64 00:21:11.822 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.822 rmmod nvme_tcp 00:21:11.822 rmmod nvme_fabrics 00:21:11.822 rmmod nvme_keyring 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:11.822 15:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 4016347 ']' 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 4016347 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 4016347 ']' 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 4016347 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4016347 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4016347' 00:21:11.822 killing process with pid 4016347 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 4016347 00:21:11.822 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 4016347 00:21:12.083 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:12.083 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:12.083 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:12.083 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:12.083 
15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:21:12.083 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:12.083 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:21:12.083 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.084 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:12.084 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.084 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.084 15:18:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.998 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:13.998 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:13.998 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:13.998 15:18:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:15.911 15:18:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:18.447 15:18:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@472 -- # prepare_net_devs 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.734 15:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:23.734 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:23.735 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:23.735 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:23.735 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:23.735 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # is_hw=yes 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.735 15:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.735 15:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.735 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:21:23.736 00:21:23.736 --- 10.0.0.2 ping statistics --- 00:21:23.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.736 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:21:23.736 00:21:23.736 --- 10.0.0.1 ping statistics --- 00:21:23.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.736 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # return 0 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:23.736 net.core.busy_poll = 1 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:23.736 net.core.busy_read = 1 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:23.736 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # nvmfpid=4021478 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # waitforlisten 4021478 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 4021478 ']' 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.996 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.997 15:18:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:23.997 [2024-10-01 15:18:33.812593] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:21:23.997 [2024-10-01 15:18:33.812649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.257 [2024-10-01 15:18:33.879381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.257 [2024-10-01 15:18:33.946864] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.257 [2024-10-01 15:18:33.946902] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.257 [2024-10-01 15:18:33.946909] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.257 [2024-10-01 15:18:33.946916] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:24.257 [2024-10-01 15:18:33.946922] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.257 [2024-10-01 15:18:33.947058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.257 [2024-10-01 15:18:33.947341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.257 [2024-10-01 15:18:33.947496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.257 [2024-10-01 15:18:33.947496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.828 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.089 [2024-10-01 15:18:34.796249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.089 15:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.089 Malloc1 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.089 [2024-10-01 15:18:34.855576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=4021690 
00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:25.089 15:18:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:27.633 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:27.633 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.633 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.633 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.633 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:27.633 "tick_rate": 2400000000, 00:21:27.633 "poll_groups": [ 00:21:27.633 { 00:21:27.633 "name": "nvmf_tgt_poll_group_000", 00:21:27.633 "admin_qpairs": 1, 00:21:27.633 "io_qpairs": 4, 00:21:27.633 "current_admin_qpairs": 1, 00:21:27.633 "current_io_qpairs": 4, 00:21:27.633 "pending_bdev_io": 0, 00:21:27.633 "completed_nvme_io": 34942, 00:21:27.633 "transports": [ 00:21:27.633 { 00:21:27.633 "trtype": "TCP" 00:21:27.633 } 00:21:27.633 ] 00:21:27.633 }, 00:21:27.633 { 00:21:27.633 "name": "nvmf_tgt_poll_group_001", 00:21:27.633 "admin_qpairs": 0, 00:21:27.633 "io_qpairs": 0, 00:21:27.633 "current_admin_qpairs": 0, 00:21:27.633 "current_io_qpairs": 0, 00:21:27.633 "pending_bdev_io": 0, 00:21:27.633 "completed_nvme_io": 0, 00:21:27.633 "transports": [ 00:21:27.633 { 00:21:27.633 "trtype": "TCP" 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "name": "nvmf_tgt_poll_group_002", 00:21:27.634 "admin_qpairs": 0, 00:21:27.634 "io_qpairs": 0, 00:21:27.634 "current_admin_qpairs": 0, 00:21:27.634 
"current_io_qpairs": 0, 00:21:27.634 "pending_bdev_io": 0, 00:21:27.634 "completed_nvme_io": 0, 00:21:27.634 "transports": [ 00:21:27.634 { 00:21:27.634 "trtype": "TCP" 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "name": "nvmf_tgt_poll_group_003", 00:21:27.634 "admin_qpairs": 0, 00:21:27.634 "io_qpairs": 0, 00:21:27.634 "current_admin_qpairs": 0, 00:21:27.634 "current_io_qpairs": 0, 00:21:27.634 "pending_bdev_io": 0, 00:21:27.634 "completed_nvme_io": 0, 00:21:27.634 "transports": [ 00:21:27.634 { 00:21:27.634 "trtype": "TCP" 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }' 00:21:27.634 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:27.634 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:27.634 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:21:27.634 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:21:27.634 15:18:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 4021690 00:21:35.767 Initializing NVMe Controllers 00:21:35.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:35.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:35.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:35.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:35.767 Initialization complete. Launching workers. 
00:21:35.767 ======================================================== 00:21:35.767 Latency(us) 00:21:35.767 Device Information : IOPS MiB/s Average min max 00:21:35.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5997.30 23.43 10711.00 1396.02 60139.17 00:21:35.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6302.30 24.62 10153.86 1399.61 58454.13 00:21:35.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5366.30 20.96 11953.08 1398.99 57188.74 00:21:35.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6110.20 23.87 10473.84 1273.81 61278.87 00:21:35.767 ======================================================== 00:21:35.767 Total : 23776.10 92.88 10782.71 1273.81 61278.87 00:21:35.767 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.767 rmmod nvme_tcp 00:21:35.767 rmmod nvme_fabrics 00:21:35.767 rmmod nvme_keyring 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:35.767 15:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@513 -- # '[' -n 4021478 ']' 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # killprocess 4021478 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 4021478 ']' 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 4021478 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4021478 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4021478' 00:21:35.767 killing process with pid 4021478 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 4021478 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 4021478 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-save 00:21:35.767 
15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@787 -- # iptables-restore 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.767 15:18:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:37.678 00:21:37.678 real 0m54.089s 00:21:37.678 user 2m50.264s 00:21:37.678 sys 0m10.908s 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:37.678 ************************************ 00:21:37.678 END TEST nvmf_perf_adq 00:21:37.678 ************************************ 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.678 ************************************ 00:21:37.678 START TEST nvmf_shutdown 00:21:37.678 ************************************ 00:21:37.678 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:37.939 * Looking for test storage... 00:21:37.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.939 15:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.939 --rc genhtml_branch_coverage=1 00:21:37.939 --rc genhtml_function_coverage=1 00:21:37.939 --rc genhtml_legend=1 00:21:37.939 --rc geninfo_all_blocks=1 00:21:37.939 --rc geninfo_unexecuted_blocks=1 00:21:37.939 00:21:37.939 ' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.939 --rc genhtml_branch_coverage=1 00:21:37.939 --rc genhtml_function_coverage=1 00:21:37.939 --rc genhtml_legend=1 00:21:37.939 --rc geninfo_all_blocks=1 00:21:37.939 --rc geninfo_unexecuted_blocks=1 00:21:37.939 00:21:37.939 ' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.939 --rc genhtml_branch_coverage=1 00:21:37.939 --rc genhtml_function_coverage=1 00:21:37.939 --rc genhtml_legend=1 00:21:37.939 --rc geninfo_all_blocks=1 00:21:37.939 --rc geninfo_unexecuted_blocks=1 00:21:37.939 00:21:37.939 ' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:37.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.939 --rc genhtml_branch_coverage=1 00:21:37.939 --rc genhtml_function_coverage=1 00:21:37.939 --rc genhtml_legend=1 00:21:37.939 --rc geninfo_all_blocks=1 00:21:37.939 --rc geninfo_unexecuted_blocks=1 00:21:37.939 00:21:37.939 ' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:37.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:37.939 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:37.940 ************************************ 00:21:37.940 START TEST nvmf_shutdown_tc1 00:21:37.940 ************************************ 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:37.940 15:18:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:44.534 15:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.534 15:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:44.534 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:44.534 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.534 15:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:44.534 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:44.534 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:44.534 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # is_hw=yes 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.535 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:44.796 15:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:44.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:21:44.796 00:21:44.796 --- 10.0.0.2 ping statistics --- 00:21:44.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.796 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:21:44.796 00:21:44.796 --- 10.0.0.1 ping statistics --- 00:21:44.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.796 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # return 0 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:44.796 15:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # nvmfpid=4027922 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # waitforlisten 4027922 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 4027922 ']' 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:44.796 15:18:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:45.057 [2024-10-01 15:18:54.661134] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:21:45.058 [2024-10-01 15:18:54.661217] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.058 [2024-10-01 15:18:54.749352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.058 [2024-10-01 15:18:54.843468] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.058 [2024-10-01 15:18:54.843529] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.058 [2024-10-01 15:18:54.843537] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.058 [2024-10-01 15:18:54.843544] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.058 [2024-10-01 15:18:54.843551] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.058 [2024-10-01 15:18:54.843716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.058 [2024-10-01 15:18:54.843876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.058 [2024-10-01 15:18:54.844062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.058 [2024-10-01 15:18:54.844062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:45.629 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.629 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:45.629 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:45.629 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:45.629 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.889 [2024-10-01 15:18:55.508170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.889 15:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.889 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.890 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.890 Malloc1 00:21:45.890 [2024-10-01 15:18:55.615658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.890 Malloc2 00:21:45.890 Malloc3 00:21:45.890 Malloc4 00:21:46.150 Malloc5 00:21:46.150 Malloc6 00:21:46.150 Malloc7 00:21:46.150 Malloc8 00:21:46.150 Malloc9 
00:21:46.150 Malloc10 00:21:46.150 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.150 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:46.150 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:46.150 15:18:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.411 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=4028161 00:21:46.411 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 4028161 /var/tmp/bdevperf.sock 00:21:46.411 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 4028161 ']' 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
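The trace above shows the harness calling `waitforlisten 4028161 /var/tmp/bdevperf.sock` with `max_retries=100` and blocking until the bdev_svc process exposes its RPC socket. A minimal sketch of that wait-for-UNIX-socket pattern (this is an illustrative stand-in, not SPDK's actual `waitforlisten` helper; `wait_for_socket` and the `mktemp` paths are hypothetical):

```shell
# Poll until a process creates its UNIX-domain RPC socket, capped at a
# retry budget similar to the log's max_retries=100 (0.1s per attempt).
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S "$sock" ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

# Demo: a background job binds the socket after a short delay,
# standing in for bdev_svc starting up.
tmp=$(mktemp -d)
( sleep 0.3
  python3 -c 'import socket, sys
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])' "$tmp/bdevperf.sock" ) &
wait_for_socket "$tmp/bdevperf.sock" && echo "listening"
wait
```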
00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": ${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": ${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": ${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": ${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": ${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": 
${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": ${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 [2024-10-01 15:18:56.084329] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:21:46.412 [2024-10-01 15:18:56.084398] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": ${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.412 "adrfam": "ipv4", 00:21:46.412 "trsvcid": "$NVMF_PORT", 00:21:46.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.412 "hdgst": ${hdgst:-false}, 00:21:46.412 "ddgst": ${ddgst:-false} 00:21:46.412 }, 00:21:46.412 "method": "bdev_nvme_attach_controller" 
00:21:46.412 } 00:21:46.412 EOF 00:21:46.412 )") 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:46.412 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:46.412 { 00:21:46.412 "params": { 00:21:46.412 "name": "Nvme$subsystem", 00:21:46.412 "trtype": "$TEST_TRANSPORT", 00:21:46.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "$NVMF_PORT", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.413 "hdgst": ${hdgst:-false}, 00:21:46.413 "ddgst": ${ddgst:-false} 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 } 00:21:46.413 EOF 00:21:46.413 )") 00:21:46.413 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:46.413 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 
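The `gen_nvmf_target_json` trace above repeats one bash idiom per subsystem: append a heredoc-built JSON snippet to a `config` array via `config+=("$(cat <<-EOF ...)")`, then set `IFS=,` so that `"${config[*]}"` joins the snippets with commas before handing them to `printf`/`jq`. A hedged, simplified sketch of that accumulate-then-join pattern (three subsystems and a trimmed parameter set; not the harness's real function):

```shell
# Build one JSON object per subsystem and collect them in an array.
config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# "${config[*]}" joins array elements on the first character of IFS,
# which is how the log's 'IFS=,' line turns N snippets into one list.
old_ifs=$IFS
IFS=,
joined="${config[*]}"
IFS=$old_ifs

# Wrapping the joined snippets in [ ] yields a valid JSON array.
echo "[$joined]" | python3 -m json.tool >/dev/null && echo "valid JSON"
```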
00:21:46.413 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:21:46.413 15:18:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme1", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme2", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme3", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme4", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 
00:21:46.413 "name": "Nvme5", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme6", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme7", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme8", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme9", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 },{ 00:21:46.413 "params": { 00:21:46.413 "name": "Nvme10", 00:21:46.413 "trtype": "tcp", 00:21:46.413 "traddr": "10.0.0.2", 00:21:46.413 "adrfam": "ipv4", 00:21:46.413 "trsvcid": "4420", 00:21:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:46.413 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:46.413 "hdgst": false, 00:21:46.413 "ddgst": false 00:21:46.413 }, 00:21:46.413 "method": "bdev_nvme_attach_controller" 00:21:46.413 }' 00:21:46.413 [2024-10-01 15:18:56.148202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.413 [2024-10-01 15:18:56.213578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 4028161 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:47.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 4028161 Killed 
$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:47.797 15:18:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 4027922 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # config=() 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # local subsystem config 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.740 { 00:21:48.740 "params": { 00:21:48.740 "name": "Nvme$subsystem", 00:21:48.740 "trtype": "$TEST_TRANSPORT", 00:21:48.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.740 "adrfam": "ipv4", 00:21:48.740 "trsvcid": "$NVMF_PORT", 00:21:48.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.740 "hdgst": ${hdgst:-false}, 00:21:48.740 "ddgst": ${ddgst:-false} 00:21:48.740 }, 00:21:48.740 "method": "bdev_nvme_attach_controller" 00:21:48.740 } 00:21:48.740 EOF 00:21:48.740 )") 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.740 15:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.740 { 00:21:48.740 "params": { 00:21:48.740 "name": "Nvme$subsystem", 00:21:48.740 "trtype": "$TEST_TRANSPORT", 00:21:48.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.740 "adrfam": "ipv4", 00:21:48.740 "trsvcid": "$NVMF_PORT", 00:21:48.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.740 "hdgst": ${hdgst:-false}, 00:21:48.740 "ddgst": ${ddgst:-false} 00:21:48.740 }, 00:21:48.740 "method": "bdev_nvme_attach_controller" 00:21:48.740 } 00:21:48.740 EOF 00:21:48.740 )") 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.740 { 00:21:48.740 "params": { 00:21:48.740 "name": "Nvme$subsystem", 00:21:48.740 "trtype": "$TEST_TRANSPORT", 00:21:48.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.740 "adrfam": "ipv4", 00:21:48.740 "trsvcid": "$NVMF_PORT", 00:21:48.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.740 "hdgst": ${hdgst:-false}, 00:21:48.740 "ddgst": ${ddgst:-false} 00:21:48.740 }, 00:21:48.740 "method": "bdev_nvme_attach_controller" 00:21:48.740 } 00:21:48.740 EOF 00:21:48.740 )") 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.740 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.740 
15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.740 { 00:21:48.740 "params": { 00:21:48.740 "name": "Nvme$subsystem", 00:21:48.741 "trtype": "$TEST_TRANSPORT", 00:21:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "$NVMF_PORT", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.741 "hdgst": ${hdgst:-false}, 00:21:48.741 "ddgst": ${ddgst:-false} 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 } 00:21:48.741 EOF 00:21:48.741 )") 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.741 { 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme$subsystem", 00:21:48.741 "trtype": "$TEST_TRANSPORT", 00:21:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "$NVMF_PORT", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.741 "hdgst": ${hdgst:-false}, 00:21:48.741 "ddgst": ${ddgst:-false} 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 } 00:21:48.741 EOF 00:21:48.741 )") 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:21:48.741 { 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme$subsystem", 00:21:48.741 "trtype": "$TEST_TRANSPORT", 00:21:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "$NVMF_PORT", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.741 "hdgst": ${hdgst:-false}, 00:21:48.741 "ddgst": ${ddgst:-false} 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 } 00:21:48.741 EOF 00:21:48.741 )") 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.741 [2024-10-01 15:18:58.540715] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:21:48.741 [2024-10-01 15:18:58.540774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028717 ] 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.741 { 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme$subsystem", 00:21:48.741 "trtype": "$TEST_TRANSPORT", 00:21:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "$NVMF_PORT", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.741 "hdgst": ${hdgst:-false}, 00:21:48.741 "ddgst": ${ddgst:-false} 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 } 00:21:48.741 EOF 00:21:48.741 )") 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@578 -- # cat 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.741 { 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme$subsystem", 00:21:48.741 "trtype": "$TEST_TRANSPORT", 00:21:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "$NVMF_PORT", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.741 "hdgst": ${hdgst:-false}, 00:21:48.741 "ddgst": ${ddgst:-false} 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 } 00:21:48.741 EOF 00:21:48.741 )") 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.741 { 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme$subsystem", 00:21:48.741 "trtype": "$TEST_TRANSPORT", 00:21:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "$NVMF_PORT", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.741 "hdgst": ${hdgst:-false}, 00:21:48.741 "ddgst": ${ddgst:-false} 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 } 00:21:48.741 EOF 00:21:48.741 )") 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:48.741 { 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme$subsystem", 00:21:48.741 "trtype": "$TEST_TRANSPORT", 00:21:48.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "$NVMF_PORT", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:48.741 "hdgst": ${hdgst:-false}, 00:21:48.741 "ddgst": ${ddgst:-false} 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 } 00:21:48.741 EOF 00:21:48.741 )") 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@578 -- # cat 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # jq . 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@581 -- # IFS=, 00:21:48.741 15:18:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme1", 00:21:48.741 "trtype": "tcp", 00:21:48.741 "traddr": "10.0.0.2", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "4420", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.741 "hdgst": false, 00:21:48.741 "ddgst": false 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 },{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme2", 00:21:48.741 "trtype": "tcp", 00:21:48.741 "traddr": "10.0.0.2", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "4420", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:48.741 "hdgst": false, 00:21:48.741 "ddgst": false 00:21:48.741 }, 
00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 },{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme3", 00:21:48.741 "trtype": "tcp", 00:21:48.741 "traddr": "10.0.0.2", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "4420", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:48.741 "hdgst": false, 00:21:48.741 "ddgst": false 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 },{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme4", 00:21:48.741 "trtype": "tcp", 00:21:48.741 "traddr": "10.0.0.2", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "4420", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:48.741 "hdgst": false, 00:21:48.741 "ddgst": false 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 },{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme5", 00:21:48.741 "trtype": "tcp", 00:21:48.741 "traddr": "10.0.0.2", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "4420", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:48.741 "hdgst": false, 00:21:48.741 "ddgst": false 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 },{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme6", 00:21:48.741 "trtype": "tcp", 00:21:48.741 "traddr": "10.0.0.2", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "4420", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:48.741 "hdgst": false, 00:21:48.741 "ddgst": false 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 },{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme7", 00:21:48.741 "trtype": "tcp", 00:21:48.741 "traddr": "10.0.0.2", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "4420", 00:21:48.741 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:48.741 "hdgst": false, 00:21:48.741 "ddgst": false 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 },{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme8", 00:21:48.741 "trtype": "tcp", 00:21:48.741 "traddr": "10.0.0.2", 00:21:48.741 "adrfam": "ipv4", 00:21:48.741 "trsvcid": "4420", 00:21:48.741 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:48.741 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:48.741 "hdgst": false, 00:21:48.741 "ddgst": false 00:21:48.741 }, 00:21:48.741 "method": "bdev_nvme_attach_controller" 00:21:48.741 },{ 00:21:48.741 "params": { 00:21:48.741 "name": "Nvme9", 00:21:48.741 "trtype": "tcp", 00:21:48.742 "traddr": "10.0.0.2", 00:21:48.742 "adrfam": "ipv4", 00:21:48.742 "trsvcid": "4420", 00:21:48.742 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:48.742 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:48.742 "hdgst": false, 00:21:48.742 "ddgst": false 00:21:48.742 }, 00:21:48.742 "method": "bdev_nvme_attach_controller" 00:21:48.742 },{ 00:21:48.742 "params": { 00:21:48.742 "name": "Nvme10", 00:21:48.742 "trtype": "tcp", 00:21:48.742 "traddr": "10.0.0.2", 00:21:48.742 "adrfam": "ipv4", 00:21:48.742 "trsvcid": "4420", 00:21:48.742 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:48.742 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:48.742 "hdgst": false, 00:21:48.742 "ddgst": false 00:21:48.742 }, 00:21:48.742 "method": "bdev_nvme_attach_controller" 00:21:48.742 }' 00:21:49.001 [2024-10-01 15:18:58.603018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.001 [2024-10-01 15:18:58.667762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.393 Running I/O for 1 seconds... 
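Editor's note: the trace above shows nvmf/common.sh looping over subsystems, appending one heredoc JSON fragment per controller to a `config` array, then joining the fragments with `IFS=,` before handing them to `jq`. A minimal standalone sketch of that pattern follows; the transport, address, and port values are stand-ins for illustration, not the real CI environment, and only two subsystems are generated instead of ten.

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly loop from the trace: each iteration emits
# a JSON fragment via a heredoc and pushes it into an array.
TEST_TRANSPORT=tcp            # stand-in values, not the CI environment
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# IFS=, makes "${config[*]}" expand to a comma-separated list, which is
# the shape the original script pipes through jq; here we just print it.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

Setting `IFS` inside the command substitution keeps the change scoped to that subshell, which is why the original script can reuse the default word splitting afterwards.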
00:21:51.333 1810.00 IOPS, 113.12 MiB/s
00:21:51.333 Latency(us)
00:21:51.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:51.333 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme1n1 : 1.07 179.65 11.23 0.00 0.00 352643.70 21080.75 281367.89
00:21:51.333 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme2n1 : 1.19 219.63 13.73 0.00 0.00 270011.56 10267.31 249910.61
00:21:51.333 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme3n1 : 1.12 228.44 14.28 0.00 0.00 267658.24 39103.15 244667.73
00:21:51.333 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme4n1 : 1.19 269.05 16.82 0.00 0.00 218491.05 19770.03 225443.84
00:21:51.333 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme5n1 : 1.16 221.21 13.83 0.00 0.00 266209.92 15837.87 251658.24
00:21:51.333 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme6n1 : 1.20 267.04 16.69 0.00 0.00 216731.31 21189.97 242920.11
00:21:51.333 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme7n1 : 1.19 267.94 16.75 0.00 0.00 211830.10 14964.05 304087.04
00:21:51.333 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme8n1 : 1.21 265.46 16.59 0.00 0.00 211563.01 11905.71 256901.12
00:21:51.333 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme9n1 : 1.17 219.55 13.72 0.00 0.00 249804.16 17803.95 276125.01
00:21:51.333 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.333 Verification LBA range: start 0x0 length 0x400
00:21:51.333 Nvme10n1 : 1.21 264.18 16.51 0.00 0.00 204740.69 6116.69 253405.87
00:21:51.333 ===================================================================================================================
00:21:51.333 Total : 2402.17 150.14 0.00 0.00 240721.27 6116.69 304087.04
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # nvmfcleanup
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i
in {1..20} 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:51.594 rmmod nvme_tcp 00:21:51.594 rmmod nvme_fabrics 00:21:51.594 rmmod nvme_keyring 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@513 -- # '[' -n 4027922 ']' 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # killprocess 4027922 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 4027922 ']' 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 4027922 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4027922 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4027922' 
00:21:51.594 killing process with pid 4027922 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 4027922 00:21:51.594 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 4027922 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-save 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@787 -- # iptables-restore 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.876 15:19:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.419 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 
-- # ip -4 addr flush cvl_0_1 00:21:54.419 00:21:54.419 real 0m15.973s 00:21:54.419 user 0m32.590s 00:21:54.419 sys 0m6.606s 00:21:54.419 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.420 ************************************ 00:21:54.420 END TEST nvmf_shutdown_tc1 00:21:54.420 ************************************ 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.420 ************************************ 00:21:54.420 START TEST nvmf_shutdown_tc2 00:21:54.420 ************************************ 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:54.420 15:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.420 15:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.420 15:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:54.420 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:54.420 15:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:54.420 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in 
"${pci_devs[@]}" 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:54.420 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # (( 1 == 
0 )) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:54.420 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # is_hw=yes 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.420 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.421 15:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:54.421 15:19:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:54.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:21:54.421 00:21:54.421 --- 10.0.0.2 ping statistics --- 00:21:54.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.421 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:54.421 00:21:54.421 --- 10.0.0.1 ping statistics --- 00:21:54.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.421 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # return 0 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.421 
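Editor's note: the two ping checks above confirm the namespace plumbing that nvmf_tcp_init performed a moment earlier: the target-side interface (cvl_0_0) is moved into a fresh network namespace and each side gets an address on 10.0.0.0/24. Below is a dry-run sketch of that sequence; `run()` only prints the commands (the real ones need root), the interface and namespace names match the log, and the helper itself is an illustration rather than the actual nvmf/common.sh code.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup the trace performs.
run() { printf '%s\n' "$*"; }   # print instead of execute (needs root)

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
target_if=cvl_0_0      # moved into the namespace, gets 10.0.0.2
initiator_if=cvl_0_1   # stays in the root namespace, gets 10.0.0.1

run ip netns add "$NVMF_TARGET_NAMESPACE"
run ip link set "$target_if" netns "$NVMF_TARGET_NAMESPACE"
run ip addr add 10.0.0.1/24 dev "$initiator_if"
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$target_if" up
# Allow NVMe/TCP traffic (port 4420) in on the initiator-side interface,
# as the ipts wrapper in the trace does with an SPDK_NVMF comment tag.
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
```

Because the target runs entirely inside the namespace, every later target-side command in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk`, including the second ping.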
15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # nvmfpid=4029836 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # waitforlisten 4029836 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 4029836 ']' 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.421 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.421 [2024-10-01 15:19:04.190693] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
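The trace above launches `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then calls `waitforlisten` with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`. A reduced sketch of that wait pattern is below; it is an illustration inferred from the log, not the actual `autotest_common.sh` helper (the real one also issues RPCs to confirm the listener rather than only checking for the socket file):

```shell
# Sketch of the waitforlisten idea seen in the log: after launching the
# target, block until its UNIX-domain RPC socket exists, while confirming
# the process is still alive. Defaults mirror the log (/var/tmp/spdk.sock,
# max_retries=100); the 0.1 s poll interval is an assumption.
waitforlisten() {
  pid=$1
  rpc_addr=${2:-/var/tmp/spdk.sock}
  max_retries=${3:-100}
  while [ "$max_retries" -gt 0 ]; do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [ -S "$rpc_addr" ] && return 0           # RPC socket is up
    sleep 0.1
    max_retries=$((max_retries - 1))
  done
  return 1
}
```

In the log the helper returns via `(( i == 0 ))` / `return 0` once the socket is ready, at which point `timing_exit start_nvmf_tgt` runs.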
00:21:54.421 [2024-10-01 15:19:04.190744] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.421 [2024-10-01 15:19:04.242339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.681 [2024-10-01 15:19:04.299068] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.681 [2024-10-01 15:19:04.299097] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.681 [2024-10-01 15:19:04.299103] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.681 [2024-10-01 15:19:04.299108] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.681 [2024-10-01 15:19:04.299112] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:54.681 [2024-10-01 15:19:04.299348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.681 [2024-10-01 15:19:04.299512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.681 [2024-10-01 15:19:04.299670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.681 [2024-10-01 15:19:04.299672] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:21:55.259 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.259 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:55.259 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:55.259 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.259 15:19:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.259 [2024-10-01 15:19:05.029545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.259 15:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.259 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.259 Malloc1 00:21:55.568 [2024-10-01 15:19:05.128426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.568 Malloc2 00:21:55.568 Malloc3 00:21:55.568 Malloc4 00:21:55.568 Malloc5 00:21:55.568 Malloc6 00:21:55.568 Malloc7 00:21:55.568 Malloc8 00:21:55.892 Malloc9 
00:21:55.892 Malloc10 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=4030217 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 4030217 /var/tmp/bdevperf.sock 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 4030217 ']' 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # config=() 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # local subsystem config 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.892 { 00:21:55.892 "params": { 00:21:55.892 "name": "Nvme$subsystem", 00:21:55.892 "trtype": "$TEST_TRANSPORT", 00:21:55.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.892 "adrfam": "ipv4", 00:21:55.892 "trsvcid": "$NVMF_PORT", 00:21:55.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.892 "hdgst": ${hdgst:-false}, 00:21:55.892 "ddgst": ${ddgst:-false} 00:21:55.892 }, 00:21:55.892 "method": "bdev_nvme_attach_controller" 00:21:55.892 } 00:21:55.892 EOF 00:21:55.892 )") 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 
00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.892 { 00:21:55.892 "params": { 00:21:55.892 "name": "Nvme$subsystem", 00:21:55.892 "trtype": "$TEST_TRANSPORT", 00:21:55.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.892 "adrfam": "ipv4", 00:21:55.892 "trsvcid": "$NVMF_PORT", 00:21:55.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.892 "hdgst": ${hdgst:-false}, 00:21:55.892 "ddgst": ${ddgst:-false} 00:21:55.892 }, 00:21:55.892 "method": "bdev_nvme_attach_controller" 00:21:55.892 } 00:21:55.892 EOF 00:21:55.892 )") 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.892 { 00:21:55.892 "params": { 00:21:55.892 "name": "Nvme$subsystem", 00:21:55.892 "trtype": "$TEST_TRANSPORT", 00:21:55.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.892 "adrfam": "ipv4", 00:21:55.892 "trsvcid": "$NVMF_PORT", 00:21:55.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.892 "hdgst": ${hdgst:-false}, 00:21:55.892 "ddgst": ${ddgst:-false} 00:21:55.892 }, 00:21:55.892 "method": "bdev_nvme_attach_controller" 00:21:55.892 } 00:21:55.892 EOF 00:21:55.892 )") 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat 
<<-EOF 00:21:55.892 { 00:21:55.892 "params": { 00:21:55.892 "name": "Nvme$subsystem", 00:21:55.892 "trtype": "$TEST_TRANSPORT", 00:21:55.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.892 "adrfam": "ipv4", 00:21:55.892 "trsvcid": "$NVMF_PORT", 00:21:55.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.892 "hdgst": ${hdgst:-false}, 00:21:55.892 "ddgst": ${ddgst:-false} 00:21:55.892 }, 00:21:55.892 "method": "bdev_nvme_attach_controller" 00:21:55.892 } 00:21:55.892 EOF 00:21:55.892 )") 00:21:55.892 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.893 { 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme$subsystem", 00:21:55.893 "trtype": "$TEST_TRANSPORT", 00:21:55.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "$NVMF_PORT", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.893 "hdgst": ${hdgst:-false}, 00:21:55.893 "ddgst": ${ddgst:-false} 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 } 00:21:55.893 EOF 00:21:55.893 )") 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.893 { 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme$subsystem", 00:21:55.893 "trtype": "$TEST_TRANSPORT", 
00:21:55.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "$NVMF_PORT", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.893 "hdgst": ${hdgst:-false}, 00:21:55.893 "ddgst": ${ddgst:-false} 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 } 00:21:55.893 EOF 00:21:55.893 )") 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.893 { 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme$subsystem", 00:21:55.893 "trtype": "$TEST_TRANSPORT", 00:21:55.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "$NVMF_PORT", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.893 "hdgst": ${hdgst:-false}, 00:21:55.893 "ddgst": ${ddgst:-false} 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 } 00:21:55.893 EOF 00:21:55.893 )") 00:21:55.893 [2024-10-01 15:19:05.572584] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:21:55.893 [2024-10-01 15:19:05.572640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030217 ] 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.893 { 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme$subsystem", 00:21:55.893 "trtype": "$TEST_TRANSPORT", 00:21:55.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "$NVMF_PORT", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.893 "hdgst": ${hdgst:-false}, 00:21:55.893 "ddgst": ${ddgst:-false} 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 } 00:21:55.893 EOF 00:21:55.893 )") 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.893 { 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme$subsystem", 00:21:55.893 "trtype": "$TEST_TRANSPORT", 00:21:55.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "$NVMF_PORT", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.893 "hdgst": 
${hdgst:-false}, 00:21:55.893 "ddgst": ${ddgst:-false} 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 } 00:21:55.893 EOF 00:21:55.893 )") 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:21:55.893 { 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme$subsystem", 00:21:55.893 "trtype": "$TEST_TRANSPORT", 00:21:55.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "$NVMF_PORT", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.893 "hdgst": ${hdgst:-false}, 00:21:55.893 "ddgst": ${ddgst:-false} 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 } 00:21:55.893 EOF 00:21:55.893 )") 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@578 -- # cat 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # jq . 
00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@581 -- # IFS=, 00:21:55.893 15:19:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme1", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme2", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme3", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme4", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 
00:21:55.893 "name": "Nvme5", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme6", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme7", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme8", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme9", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 },{ 00:21:55.893 "params": { 00:21:55.893 "name": "Nvme10", 00:21:55.893 "trtype": "tcp", 00:21:55.893 "traddr": "10.0.0.2", 00:21:55.893 "adrfam": "ipv4", 00:21:55.893 "trsvcid": "4420", 00:21:55.893 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:55.893 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:55.893 "hdgst": false, 00:21:55.893 "ddgst": false 00:21:55.893 }, 00:21:55.893 "method": "bdev_nvme_attach_controller" 00:21:55.893 }' 00:21:55.893 [2024-10-01 15:19:05.633813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.893 [2024-10-01 15:19:05.699091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.293 Running I/O for 10 seconds... 00:21:57.293 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.293 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:57.293 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:57.293 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.293 15:19:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.293 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.293 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:57.293 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:57.294 15:19:07 
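The repeated `config+=("$(cat <<-EOF ...)")` blocks above are `gen_nvmf_target_json` emitting one attach-controller fragment per subsystem, which are then comma-joined (`IFS=,`) and printed for bdevperf's `--json /dev/fd/63`. A minimal standalone sketch of that assembly pattern, with values (10.0.0.2, port 4420, the NQN prefixes) taken from the log and the wrapping simplified to a bare JSON array:

```shell
# Sketch of the per-subsystem JSON assembly seen in the log. Each loop
# iteration produces one bdev_nvme_attach_controller fragment; fragments
# are comma-joined into an array. The real helper nests this under the
# full bdevperf config and pipes it through jq.
gen_target_json() {
  config=""
  for i in "$@"; do
    frag=$(cat <<EOF
{ "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2",
  "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$i",
  "hostnqn": "nqn.2016-06.io.spdk:host$i",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
)
    config="$config${config:+,}$frag"   # comma-join, as IFS=, does in the log
  done
  printf '[%s]\n' "$config"
}
```

Feeding the result to bdevperf via `--json /dev/fd/63` (process substitution) avoids writing a temporary config file, which matches the invocation recorded in the trace.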
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:57.294 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:57.294 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:57.294 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:57.294 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:57.554 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.554 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.554 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.554 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.554 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.554 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:57.554 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:57.554 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:57.814 15:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:57.814 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=137 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 137 -ge 100 ']' 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 4030217 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 4030217 ']' 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 4030217 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4030217 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4030217' 00:21:58.075 killing process with pid 4030217 00:21:58.075 15:19:07 
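The `waitforio` sequence above polls `bdev_get_iostat -b Nvme1n1` over `/var/tmp/bdevperf.sock`, extracts `num_read_ops` with jq, and succeeds once at least 100 reads have completed (the log shows counts of 3, then 67, then 137), retrying up to 10 times with a 250 ms sleep. The loop can be sketched in isolation by passing the iostat query in as a command; in the real script that command is `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat` piped to jq:

```shell
# Sketch of the waitforio polling loop from the trace. read_ops_cmd is any
# command that prints the current num_read_ops; the threshold (100), retry
# count (10), and sleep (0.25 s) match the values visible in the log.
waitforio() {
  read_ops_cmd=$1
  i=10
  while [ "$i" != 0 ]; do
    reads=$($read_ops_cmd)
    if [ "$reads" -ge 100 ]; then
      return 0                # enough I/O observed; caller proceeds to kill
    fi
    sleep 0.25
    i=$((i - 1))
  done
  return 1                    # bdevperf never reached the I/O threshold
}
```

Once the loop returns 0, the test tears down: `killprocess 4030217` signals bdevperf, which prints its per-Nvme latency table as it shuts down.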
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 4030217
00:21:58.075 15:19:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 4030217
00:21:58.337 Received shutdown signal, test time was about 0.985961 seconds
00:21:58.337
00:21:58.337 Latency(us)
00:21:58.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:58.337 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme1n1 : 0.98 262.51 16.41 0.00 0.00 240833.92 19770.03 220200.96
00:21:58.337 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme2n1 : 0.98 261.39 16.34 0.00 0.00 236885.12 25122.13 255153.49
00:21:58.337 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme3n1 : 0.97 263.20 16.45 0.00 0.00 230570.45 19005.44 246415.36
00:21:58.337 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme4n1 : 0.97 270.96 16.94 0.00 0.00 217775.15 7645.87 242920.11
00:21:58.337 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme5n1 : 0.95 201.29 12.58 0.00 0.00 288264.68 13926.40 249910.61
00:21:58.337 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme6n1 : 0.99 257.85 16.12 0.00 0.00 220499.83 17148.59 251658.24
00:21:58.337 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme7n1 : 0.95 212.07 13.25 0.00 0.00 258934.92 4123.31 251658.24
00:21:58.337 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme8n1 : 0.98 261.13 16.32 0.00 0.00 207951.36 22063.79 242920.11
00:21:58.337 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme9n1 : 0.96 199.00 12.44 0.00 0.00 265872.21 21299.20 277872.64
00:21:58.337 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.337 Verification LBA range: start 0x0 length 0x400
00:21:58.337 Nvme10n1 : 0.97 198.80 12.42 0.00 0.00 259525.40 20753.07 249910.61
00:21:58.337 ===================================================================================================================
00:21:58.337 Total : 2388.21 149.26 0.00 0.00 239918.34 4123.31 277872.64
00:21:58.337 15:19:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 4029836
00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:59.278 15:19:09
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.278 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.278 rmmod nvme_tcp 00:21:59.538 rmmod nvme_fabrics 00:21:59.538 rmmod nvme_keyring 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@513 -- # '[' -n 4029836 ']' 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # killprocess 4029836 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 4029836 ']' 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 4029836 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.538 15:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4029836 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4029836' 00:21:59.538 killing process with pid 4029836 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 4029836 00:21:59.538 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 4029836 00:21:59.798 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:59.798 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:59.798 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:59.798 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:59.798 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-save 00:21:59.798 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:59.798 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@787 -- # iptables-restore 00:21:59.798 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.799 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.799 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.799 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.799 15:19:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.709 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.709 00:22:01.709 real 0m7.788s 00:22:01.709 user 0m23.485s 00:22:01.709 sys 0m1.230s 00:22:01.709 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.709 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 ************************************ 00:22:01.709 END TEST nvmf_shutdown_tc2 00:22:01.709 ************************************ 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:01.970 ************************************ 00:22:01.970 START TEST nvmf_shutdown_tc3 00:22:01.970 ************************************ 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:01.970 
15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.970 15:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.970 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:01.971 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:01.971 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:01.971 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:01.971 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.971 15:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.971 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.233 15:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:02.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:02.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms
00:22:02.233
00:22:02.233 --- 10.0.0.2 ping statistics ---
00:22:02.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:02.233 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:02.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:02.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms
00:22:02.233
00:22:02.233 --- 10.0.0.1 ping statistics ---
00:22:02.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:02.233 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # return 0
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:22:02.233 15:19:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:22:02.233 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:02.233 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:02.234
15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # nvmfpid=4031622 00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # waitforlisten 4031622 00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 4031622 ']' 00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.234 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:02.494 [2024-10-01 15:19:12.097390] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:22:02.494 [2024-10-01 15:19:12.097474] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.494 [2024-10-01 15:19:12.185488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.494 [2024-10-01 15:19:12.247323] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.494 [2024-10-01 15:19:12.247358] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.494 [2024-10-01 15:19:12.247364] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.494 [2024-10-01 15:19:12.247369] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.494 [2024-10-01 15:19:12.247374] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:02.494 [2024-10-01 15:19:12.247485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.494 [2024-10-01 15:19:12.247645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.494 [2024-10-01 15:19:12.247804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.494 [2024-10-01 15:19:12.247806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:03.064 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.064 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:03.064 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:03.064 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.064 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.327 [2024-10-01 15:19:12.949533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.327 15:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.327 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.327 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:03.327 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.327 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.327 Malloc1 00:22:03.327 [2024-10-01 15:19:13.048425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.327 Malloc2 00:22:03.327 Malloc3 00:22:03.327 Malloc4 00:22:03.327 Malloc5 00:22:03.588 Malloc6 00:22:03.588 Malloc7 00:22:03.588 Malloc8 00:22:03.588 Malloc9 
00:22:03.588 Malloc10 00:22:03.588 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.588 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:03.588 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.588 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.588 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=4031853 00:22:03.588 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 4031853 /var/tmp/bdevperf.sock 00:22:03.588 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 4031853 ']' 00:22:03.588 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # config=() 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # local subsystem config 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.850 { 00:22:03.850 "params": { 00:22:03.850 "name": "Nvme$subsystem", 00:22:03.850 "trtype": "$TEST_TRANSPORT", 00:22:03.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.850 "adrfam": "ipv4", 00:22:03.850 "trsvcid": "$NVMF_PORT", 00:22:03.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.850 "hdgst": ${hdgst:-false}, 00:22:03.850 "ddgst": ${ddgst:-false} 00:22:03.850 }, 00:22:03.850 "method": "bdev_nvme_attach_controller" 00:22:03.850 } 00:22:03.850 EOF 00:22:03.850 )") 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.850 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 
00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.851 } 00:22:03.851 EOF 00:22:03.851 )") 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.851 } 00:22:03.851 EOF 00:22:03.851 )") 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat 
<<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.851 } 00:22:03.851 EOF 00:22:03.851 )") 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.851 } 00:22:03.851 EOF 00:22:03.851 )") 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 
00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.851 } 00:22:03.851 EOF 00:22:03.851 )") 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.851 } 00:22:03.851 EOF 00:22:03.851 )") 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.851 } 00:22:03.851 EOF 00:22:03.851 )") 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.851 [2024-10-01 15:19:13.508900] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:22:03.851 [2024-10-01 15:19:13.508957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031853 ] 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.851 } 00:22:03.851 EOF 00:22:03.851 )") 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:22:03.851 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:22:03.851 { 00:22:03.851 "params": { 00:22:03.851 "name": "Nvme$subsystem", 00:22:03.851 "trtype": "$TEST_TRANSPORT", 00:22:03.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.851 "adrfam": "ipv4", 00:22:03.851 "trsvcid": "$NVMF_PORT", 00:22:03.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.851 "hdgst": ${hdgst:-false}, 00:22:03.851 "ddgst": ${ddgst:-false} 00:22:03.851 }, 00:22:03.851 "method": "bdev_nvme_attach_controller" 00:22:03.852 } 00:22:03.852 EOF 00:22:03.852 )") 00:22:03.852 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@578 -- # cat 00:22:03.852 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # jq . 00:22:03.852 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@581 -- # IFS=, 00:22:03.852 15:19:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme1", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme2", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme3", 00:22:03.852 
"trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme4", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme5", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme6", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme7", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": 
false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme8", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme9", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 },{ 00:22:03.852 "params": { 00:22:03.852 "name": "Nvme10", 00:22:03.852 "trtype": "tcp", 00:22:03.852 "traddr": "10.0.0.2", 00:22:03.852 "adrfam": "ipv4", 00:22:03.852 "trsvcid": "4420", 00:22:03.852 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:03.852 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:03.852 "hdgst": false, 00:22:03.852 "ddgst": false 00:22:03.852 }, 00:22:03.852 "method": "bdev_nvme_attach_controller" 00:22:03.852 }' 00:22:03.852 [2024-10-01 15:19:13.570355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.852 [2024-10-01 15:19:13.635482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.765 Running I/O for 10 seconds... 
00:22:06.350 15:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.350 15:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:06.350 15:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:06.350 15:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.350 15:19:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:06.350 15:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 4031622 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 4031622 ']' 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 4031622 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.350 15:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4031622 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4031622' 00:22:06.350 killing process with pid 4031622 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 4031622 00:22:06.350 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 4031622 00:22:06.350 [2024-10-01 15:19:16.125116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd5b0 is same with the state(6) to be set 00:22:06.350 [2024-10-01 15:19:16.125165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd5b0 is same with the state(6) to be set 00:22:06.350 [2024-10-01 15:19:16.125171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd5b0 is same with the state(6) to be set 00:22:06.350 [2024-10-01 15:19:16.125176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd5b0 is same with the state(6) to be set 00:22:06.350 [2024-10-01 15:19:16.125181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd5b0 is same with the state(6) to be set 00:22:06.350 [2024-10-01 15:19:16.125186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd5b0 is same with the state(6) to be set 00:22:06.350 [2024-10-01 15:19:16.125192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cd5b0 is 
same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126492]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126553] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126610] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126669] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.351 [2024-10-01 15:19:16.126702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.126706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.126711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.126716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.126720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc5e0 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127832] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127906] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127962] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.127990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128026] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128083] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128143] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.128152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cda80 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129414] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.352 [2024-10-01 15:19:16.129454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129474] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129545] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129603] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129681] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.129713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cdf50 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130756] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130817] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130874] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.353 [2024-10-01 15:19:16.130917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130931] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130989] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.130994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.131003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.131008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.131012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.131018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.131023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.131028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.131032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ce440 is same with the state(6) to be set 00:22:06.354 [2024-10-01 15:19:16.132137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 
15:19:16.132200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-10-01 15:19:16.132543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.354 [2024-10-01 15:19:16.132552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 
[2024-10-01 15:19:16.132586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.355 [2024-10-01 15:19:16.132874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.132981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.132988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-10-01 15:19:16.133228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.355 [2024-10-01 15:19:16.133237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.356 [2024-10-01 15:19:16.133244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.356 [2024-10-01 15:19:16.133253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.356 [2024-10-01 
15:19:16.133261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.356 [2024-10-01 15:19:16.133316] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14167e0 was disconnected and freed. reset controller. 00:22:06.356 [2024-10-01 15:19:16.136626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136688] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136744] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136804] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136859] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.136882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cede0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137452] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137516] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.356 [2024-10-01 15:19:16.137564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137573] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137633] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137692] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.137731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf2b0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138388] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138447] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138506] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138562] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.357 [2024-10-01 15:19:16.138612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138617] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.138669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cf7a0 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139125] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.139194] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.147959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.358 [2024-10-01 15:19:16.147981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.358 [2024-10-01 15:19:16.147990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.358 [2024-10-01 15:19:16.148003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.358 [2024-10-01 15:19:16.148011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.358 [2024-10-01 15:19:16.148018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.358 [2024-10-01 15:19:16.148026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.358 [2024-10-01 15:19:16.148033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.358 [2024-10-01 15:19:16.148041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee4920 is same with the state(6) to be set 00:22:06.358 [2024-10-01 15:19:16.148077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.358 [2024-10-01 15:19:16.148090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.358 [2024-10-01 15:19:16.148098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304ed0 is same with the state(6) to be set 00:22:06.359 [2024-10-01 15:19:16.148167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148199] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1345100 is same with the state(6) to be set 00:22:06.359 [2024-10-01 15:19:16.148254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3ef0 is same with the state(6) to be set 00:22:06.359 [2024-10-01 15:19:16.148342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13088e0 is same with the state(6) to be set 00:22:06.359 [2024-10-01 15:19:16.148422] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3090 is same with the state(6) to be set 00:22:06.359 [2024-10-01 15:19:16.148506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee4d80 is same with the state(6) to be set 00:22:06.359 [2024-10-01 15:19:16.148592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1340a20 is same with the state(6) to be set 00:22:06.359 [2024-10-01 15:19:16.148678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.359 [2024-10-01 15:19:16.148733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:06.359 [2024-10-01 15:19:16.148740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309e20 is same with the state(6) to be set 00:22:06.359 [2024-10-01 15:19:16.148789] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.359 [2024-10-01 15:19:16.148832] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.360 [2024-10-01 15:19:16.150317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150515] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 
[2024-10-01 15:19:16.150802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.150986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.360 [2024-10-01 15:19:16.150993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.360 [2024-10-01 15:19:16.151012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 
15:19:16.151188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.151423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.151731] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1551460 was disconnected and freed. reset controller. 
00:22:06.361 [2024-10-01 15:19:16.151763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:06.361 [2024-10-01 15:19:16.151782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13088e0 (9): Bad file descriptor 00:22:06.361 [2024-10-01 15:19:16.151839] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.361 [2024-10-01 15:19:16.152292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.361 [2024-10-01 15:19:16.152312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfc70 is same with the state(6) to be set 00:22:06.361 [2024-10-01 15:19:16.153273] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.361 [2024-10-01 15:19:16.153709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.361 [2024-10-01 15:19:16.153731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1340a20 (9): Bad file descriptor 00:22:06.361 [2024-10-01 15:19:16.153961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.361 [2024-10-01 15:19:16.153975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13088e0 with addr=10.0.0.2, port=4420 00:22:06.361 [2024-10-01 15:19:16.153983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13088e0 is same with the state(6) to be set 00:22:06.361 [2024-10-01 15:19:16.154431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13088e0 (9): Bad file descriptor 00:22:06.361 [2024-10-01 15:19:16.154504] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.361 [2024-10-01 15:19:16.154538] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected 
PDU type 0x00 00:22:06.361 [2024-10-01 15:19:16.154924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.361 [2024-10-01 15:19:16.154939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1340a20 with addr=10.0.0.2, port=4420 00:22:06.361 [2024-10-01 15:19:16.154947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1340a20 is same with the state(6) to be set 00:22:06.361 [2024-10-01 15:19:16.154956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:06.361 [2024-10-01 15:19:16.154963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:06.361 [2024-10-01 15:19:16.154971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:06.361 [2024-10-01 15:19:16.155025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.155035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.155048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.155056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.155065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.155073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 
15:19:16.155083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.155090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.155100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.155111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.155121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.361 [2024-10-01 15:19:16.155129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.361 [2024-10-01 15:19:16.155139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155180] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.362 [2024-10-01 15:19:16.155377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155469] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 
15:19:16.155761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.362 [2024-10-01 15:19:16.155812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.362 [2024-10-01 15:19:16.155821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.155982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.155992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.156003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.156012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.156020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.156030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.156037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.156047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 
[2024-10-01 15:19:16.156054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.156063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.156070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.156080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.156087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.156097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.156104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.156114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.363 [2024-10-01 15:19:16.156121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.156129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9310 is same with the state(6) to be set 00:22:06.363 [2024-10-01 15:19:16.156167] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e9310 was disconnected and freed. reset controller. 
00:22:06.363 [2024-10-01 15:19:16.156239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.363 [2024-10-01 15:19:16.156255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1340a20 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.157524] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.363 [2024-10-01 15:19:16.157540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:06.363 [2024-10-01 15:19:16.157555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304ed0 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.157566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.363 [2024-10-01 15:19:16.157577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.363 [2024-10-01 15:19:16.157585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.363 [2024-10-01 15:19:16.157656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.363 [2024-10-01 15:19:16.158222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.363 [2024-10-01 15:19:16.158237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1304ed0 with addr=10.0.0.2, port=4420 00:22:06.363 [2024-10-01 15:19:16.158245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304ed0 is same with the state(6) to be set 00:22:06.363 [2024-10-01 15:19:16.158257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4920 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.158284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.363 [2024-10-01 15:19:16.158294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.158302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.363 [2024-10-01 15:19:16.158309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.158318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.363 [2024-10-01 15:19:16.158325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.363 [2024-10-01 15:19:16.158333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.363 [2024-10-01 15:19:16.158341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.363 [2024-10-01 15:19:16.158348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b010 is same with the state(6) to be set 00:22:06.363 [2024-10-01 15:19:16.158366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1345100 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.158385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee3ef0 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.158403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee3090 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.158420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4d80 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.158437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1309e20 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.158519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304ed0 (9): Bad file descriptor 00:22:06.363 [2024-10-01 15:19:16.158564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:06.363 [2024-10-01 15:19:16.158572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:06.363 [2024-10-01 15:19:16.158579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:06.363 [2024-10-01 15:19:16.158620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.363 [2024-10-01 15:19:16.163317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:06.363 [2024-10-01 15:19:16.163625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.364 [2024-10-01 15:19:16.163641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13088e0 with addr=10.0.0.2, port=4420 00:22:06.364 [2024-10-01 15:19:16.163649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13088e0 is same with the state(6) to be set 00:22:06.364 [2024-10-01 15:19:16.163688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13088e0 (9): Bad file descriptor 00:22:06.364 [2024-10-01 15:19:16.163726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:06.364 [2024-10-01 15:19:16.163734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:06.364 [2024-10-01 15:19:16.163740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:06.364 [2024-10-01 15:19:16.163781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.364 [2024-10-01 15:19:16.164535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.364 [2024-10-01 15:19:16.164797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.364 [2024-10-01 15:19:16.164810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1340a20 with addr=10.0.0.2, port=4420 00:22:06.364 [2024-10-01 15:19:16.164817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1340a20 is same with the state(6) to be set 00:22:06.364 [2024-10-01 15:19:16.164856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1340a20 (9): Bad file descriptor 00:22:06.364 [2024-10-01 15:19:16.164894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.364 [2024-10-01 15:19:16.164901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.364 [2024-10-01 15:19:16.164908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.364 [2024-10-01 15:19:16.164950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.364 [2024-10-01 15:19:16.167767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:06.364 [2024-10-01 15:19:16.167993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.364 [2024-10-01 15:19:16.168009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1304ed0 with addr=10.0.0.2, port=4420 00:22:06.364 [2024-10-01 15:19:16.168017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304ed0 is same with the state(6) to be set 00:22:06.364 [2024-10-01 15:19:16.168058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304ed0 (9): Bad file descriptor 00:22:06.364 [2024-10-01 15:19:16.168075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138b010 (9): Bad file descriptor 00:22:06.364 [2024-10-01 15:19:16.168175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:06.364 [2024-10-01 15:19:16.168183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:06.364 [2024-10-01 15:19:16.168190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:22:06.364 [2024-10-01 15:19:16.168230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.364 [2024-10-01 15:19:16.168540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168634] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.364 [2024-10-01 15:19:16.168731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.364 [2024-10-01 15:19:16.168738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 
15:19:16.168930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.168981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.168991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.169003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.169013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.169021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.169030] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.365 [2024-10-01 15:19:16.169038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.365 [2024-10-01 15:19:16.169047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 15:19:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:06.636 [2024-10-01 15:19:16.390182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 [2024-10-01 15:19:16.390250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 [2024-10-01 15:19:16.390269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 [2024-10-01 15:19:16.390288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 
[2024-10-01 15:19:16.390307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 [2024-10-01 15:19:16.390324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 [2024-10-01 15:19:16.390346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 [2024-10-01 15:19:16.390364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.636 [2024-10-01 15:19:16.390381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.636 [2024-10-01 15:19:16.390390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.637 [2024-10-01 15:19:16.390398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.637 [2024-10-01 15:19:16.390408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.390415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.390425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.390432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.390442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.390449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.390459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.390466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.390476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.390483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.390493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.390500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.390509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.390517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.390527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.390535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.390544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13953e0 is same with the state(6) to be set
00:22:06.637 [2024-10-01 15:19:16.391916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.391933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.391950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.391960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.391973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.391982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.391993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.637 [2024-10-01 15:19:16.392473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.637 [2024-10-01 15:19:16.392481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.392981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.392988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.393002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.393010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.393019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.393028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.393038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.393045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.393053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13966f0 is same with the state(6) to be set
00:22:06.638 [2024-10-01 15:19:16.394329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.394342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.394355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.394364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.394375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.394384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.394395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.394404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.394416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.638 [2024-10-01 15:19:16.394425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.638 [2024-10-01 15:19:16.394436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.394986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.394994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.395009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.395016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.395025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.639 [2024-10-01 15:19:16.395034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.639 [2024-10-01 15:19:16.395043] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.639 [2024-10-01 15:19:16.395051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.639 [2024-10-01 15:19:16.395060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.639 [2024-10-01 15:19:16.395067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.639 [2024-10-01 15:19:16.395077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.639 [2024-10-01 15:19:16.395084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.639 [2024-10-01 15:19:16.395093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.639 [2024-10-01 15:19:16.395101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.639 [2024-10-01 15:19:16.395110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.639 [2024-10-01 15:19:16.395117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.639 [2024-10-01 15:19:16.395127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 
[2024-10-01 15:19:16.395235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.395439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.395447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1413cd0 is same with the state(6) to be set 00:22:06.640 [2024-10-01 15:19:16.396730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
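The completions above all carry the status pair "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" — exactly what is expected when the submission queue is torn down with I/O still outstanding. As an illustration only (not part of the test harness), a minimal decoder for the "(SCT/SC)" pair printed in these log lines might look like this; the table holds just the generic-status codes relevant here:

```python
# Hypothetical decoder for the "(SCT/SC)" status pair printed in the
# completion lines above, e.g. "ABORTED - SQ DELETION (00/08)".
# Strings mirror the ones SPDK prints; only a few generic codes are listed.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    """Render an NVMe status code type / status code pair as text."""
    if sct == 0x0:  # generic command status
        return GENERIC_STATUS.get(sc, f"GENERIC STATUS 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status(0x0, 0x08))  # ABORTED - SQ DELETION
```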
00:22:06.640 [2024-10-01 15:19:16.396804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396899] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.396982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.396992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.397005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.397014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.397022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.397031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.397038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.397048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.397055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.397064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.397072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.640 [2024-10-01 15:19:16.397082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.640 [2024-10-01 15:19:16.397089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 
15:19:16.397191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397284] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 
[2024-10-01 15:19:16.397478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.641 [2024-10-01 15:19:16.397630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.641 [2024-10-01 15:19:16.397639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.397816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.397824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1415280 is same with the state(6) to be set 00:22:06.642 [2024-10-01 15:19:16.399097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399132] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 
15:19:16.399435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399530] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.642 [2024-10-01 15:19:16.399596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.642 [2024-10-01 15:19:16.399604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 
[2024-10-01 15:19:16.399722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.399984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.399991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400107] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.400198] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.400206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ea890 is same with the state(6) to be set 00:22:06.643 [2024-10-01 15:19:16.401478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.401490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.401501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.401509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.401518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.401525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.643 [2024-10-01 15:19:16.401535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.643 [2024-10-01 15:19:16.401543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 
15:19:16.401663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401756] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 
[2024-10-01 15:19:16.401948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.401982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.401993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.644 [2024-10-01 15:19:16.402229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.644 [2024-10-01 15:19:16.402238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402338] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402431] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.645 [2024-10-01 15:19:16.402565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.645 [2024-10-01 15:19:16.402573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ebe10 is same with the state(6) to be set 00:22:06.645 [2024-10-01 15:19:16.403834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.645 [2024-10-01 15:19:16.403849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.645 [2024-10-01 15:19:16.403861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.645 [2024-10-01 15:19:16.403871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:06.645 [2024-10-01 15:19:16.403955] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.645 [2024-10-01 15:19:16.403971] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:06.645 [2024-10-01 15:19:16.403982] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.645 [2024-10-01 15:19:16.420965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:06.645 [2024-10-01 15:19:16.420994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:06.645 [2024-10-01 15:19:16.421008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:06.645 [2024-10-01 15:19:16.421563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.645 [2024-10-01 15:19:16.421602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee4d80 with addr=10.0.0.2, port=4420 00:22:06.645 [2024-10-01 15:19:16.421614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee4d80 is same with the state(6) to be set 00:22:06.645 [2024-10-01 15:19:16.421946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.645 [2024-10-01 15:19:16.421958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee4920 with addr=10.0.0.2, port=4420 00:22:06.645 [2024-10-01 15:19:16.421966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee4920 is same with the state(6) to be set 00:22:06.645 [2024-10-01 15:19:16.422306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.645 [2024-10-01 15:19:16.422344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee3ef0 with addr=10.0.0.2, port=4420 00:22:06.645 [2024-10-01 15:19:16.422355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3ef0 is same with the state(6) to be set 00:22:06.645 [2024-10-01 15:19:16.422384] 
bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.645 [2024-10-01 15:19:16.422397] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.645 [2024-10-01 15:19:16.422407] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.645 [2024-10-01 15:19:16.422418] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.645 [2024-10-01 15:19:16.422435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee3ef0 (9): Bad file descriptor 00:22:06.645 [2024-10-01 15:19:16.422450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4920 (9): Bad file descriptor 00:22:06.645 [2024-10-01 15:19:16.422463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4d80 (9): Bad file descriptor 00:22:06.645 task offset: 28672 on job bdev=Nvme5n1 fails 00:22:06.645 1495.10 IOPS, 93.44 MiB/s [2024-10-01 15:19:16.424066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:06.645 [2024-10-01 15:19:16.424081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.645 [2024-10-01 15:19:16.424091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:06.645 [2024-10-01 15:19:16.424572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.645 [2024-10-01 15:19:16.424610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee3090 with addr=10.0.0.2, port=4420 00:22:06.645 [2024-10-01 15:19:16.424621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3090 is same with the state(6) to be 
set 00:22:06.646 [2024-10-01 15:19:16.424986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.646 [2024-10-01 15:19:16.425005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1309e20 with addr=10.0.0.2, port=4420 00:22:06.646 [2024-10-01 15:19:16.425013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309e20 is same with the state(6) to be set 00:22:06.646 [2024-10-01 15:19:16.425409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.646 [2024-10-01 15:19:16.425446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1345100 with addr=10.0.0.2, port=4420 00:22:06.646 [2024-10-01 15:19:16.425457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1345100 is same with the state(6) to be set 00:22:06.646 [2024-10-01 15:19:16.425557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 
15:19:16.425623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.646 [2024-10-01 15:19:16.425924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.425982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.425992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426029] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.646 [2024-10-01 15:19:16.426212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.646 [2024-10-01 15:19:16.426219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 
15:19:16.426326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426422] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 [2024-10-01 15:19:16.426611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.647 [2024-10-01 15:19:16.426620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.647 
[2024-10-01 15:19:16.426627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.647 [2024-10-01 15:19:16.426637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.647 [2024-10-01 15:19:16.426644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.647 [2024-10-01 15:19:16.426654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.647 [2024-10-01 15:19:16.426661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.647 [2024-10-01 15:19:16.426671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.647 [2024-10-01 15:19:16.426680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.647 [2024-10-01 15:19:16.426690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.647 [2024-10-01 15:19:16.426698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:06.647 [2024-10-01 15:19:16.426706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154ff40 is same with the state(6) to be set
00:22:06.647
00:22:06.647 Latency(us)
00:22:06.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.647 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme1n1 ended in about 1.09 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme1n1 : 1.09 117.54 7.35 58.77 0.00 359650.70 19988.48 478849.71
00:22:06.647 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme2n1 ended in about 1.09 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme2n1 : 1.09 117.27 7.33 58.64 0.00 354062.22 20206.93 380982.61
00:22:06.647 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme3n1 ended in about 1.09 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme3n1 : 1.09 117.02 7.31 58.51 0.00 348530.06 19333.12 461373.44
00:22:06.647 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme4n1 ended in about 1.10 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme4n1 : 1.10 120.41 7.53 58.38 0.00 335963.67 20316.16 438654.29
00:22:06.647 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme5n1 ended in about 0.85 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme5n1 : 0.85 226.45 14.15 75.48 0.00 190323.63 17367.04 246415.36
00:22:06.647 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme6n1 ended in about 0.86 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme6n1 : 0.86 149.67 9.35 74.83 0.00 249967.50 20643.84 277872.64
00:22:06.647 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme7n1 ended in about 1.10 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme7n1 : 1.10 121.06 7.57 58.26 0.00 316497.30 16056.32 365253.97
00:22:06.647 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme8n1 ended in about 1.10 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme8n1 : 1.10 174.39 10.90 58.13 0.00 239291.73 17367.04 332049.07
00:22:06.647 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme9n1 ended in about 1.13 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme9n1 : 1.13 113.77 7.11 56.89 0.00 320909.37 14745.60 415935.15
00:22:06.647 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.647 Job: Nvme10n1 ended in about 0.85 seconds with error
00:22:06.647 Verification LBA range: start 0x0 length 0x400
00:22:06.647 Nvme10n1 : 0.85 223.28 13.95 75.21 0.00 168661.08 4423.68 251658.24
00:22:06.647 ===================================================================================================================
00:22:06.647 Total : 1480.87 92.55 633.10 0.00 280594.29 4423.68 478849.71
00:22:06.648 [2024-10-01 15:19:16.450934] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:06.648 [2024-10-01 15:19:16.450961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:06.648 [2024-10-01 15:19:16.451487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.648 [2024-10-01 15:19:16.451525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13088e0 with addr=10.0.0.2, port=4420
00:22:06.648 [2024-10-01 15:19:16.451536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13088e0 is same with the state(6) to be set
00:22:06.648 [2024-10-01 15:19:16.451731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.648 [2024-10-01 15:19:16.451743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1340a20 with addr=10.0.0.2, port=4420
00:22:06.648
[2024-10-01 15:19:16.451750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1340a20 is same with the state(6) to be set 00:22:06.648 [2024-10-01 15:19:16.452018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.648 [2024-10-01 15:19:16.452029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1304ed0 with addr=10.0.0.2, port=4420 00:22:06.648 [2024-10-01 15:19:16.452037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304ed0 is same with the state(6) to be set 00:22:06.648 [2024-10-01 15:19:16.452049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee3090 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.452061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1309e20 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.452070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1345100 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.452079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.452085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.452094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.648 [2024-10-01 15:19:16.452110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.452117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.452123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:06.648 [2024-10-01 15:19:16.452134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.452140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.452147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:06.648 [2024-10-01 15:19:16.452253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.452265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.452271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.452584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.648 [2024-10-01 15:19:16.452595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138b010 with addr=10.0.0.2, port=4420 00:22:06.648 [2024-10-01 15:19:16.452603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138b010 is same with the state(6) to be set 00:22:06.648 [2024-10-01 15:19:16.452613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13088e0 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.452622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1340a20 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.452636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304ed0 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.452644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.452651] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.452658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:06.648 [2024-10-01 15:19:16.452668] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.452675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.452682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:06.648 [2024-10-01 15:19:16.452692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.452699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.452706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:06.648 [2024-10-01 15:19:16.452741] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.648 [2024-10-01 15:19:16.452754] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.648 [2024-10-01 15:19:16.452764] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.648 [2024-10-01 15:19:16.452783] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.648 [2024-10-01 15:19:16.452796] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:06.648 [2024-10-01 15:19:16.452806] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.648 [2024-10-01 15:19:16.453108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.453119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.453127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.453145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138b010 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.453154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.453160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.453167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:06.648 [2024-10-01 15:19:16.453177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.453184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.453191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:22:06.648 [2024-10-01 15:19:16.453200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.453207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.453215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:06.648 [2024-10-01 15:19:16.453256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:06.648 [2024-10-01 15:19:16.453270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.648 [2024-10-01 15:19:16.453280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.648 [2024-10-01 15:19:16.453290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.453296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.453302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.453327] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.453335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.453342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:06.648 [2024-10-01 15:19:16.453376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:06.648 [2024-10-01 15:19:16.453687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.648 [2024-10-01 15:19:16.453698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee3ef0 with addr=10.0.0.2, port=4420 00:22:06.648 [2024-10-01 15:19:16.453706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3ef0 is same with the state(6) to be set 00:22:06.648 [2024-10-01 15:19:16.454005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.648 [2024-10-01 15:19:16.454015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee4920 with addr=10.0.0.2, port=4420 00:22:06.648 [2024-10-01 15:19:16.454022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee4920 is same with the state(6) to be set 00:22:06.648 [2024-10-01 15:19:16.454339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.648 [2024-10-01 15:19:16.454349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee4d80 with addr=10.0.0.2, port=4420 00:22:06.648 [2024-10-01 15:19:16.454356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee4d80 is same with the state(6) to be set 00:22:06.648 [2024-10-01 15:19:16.454384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee3ef0 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.454394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4920 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.454404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee4d80 (9): Bad file descriptor 00:22:06.648 [2024-10-01 15:19:16.454432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:06.648 [2024-10-01 
15:19:16.454439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.454446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:06.648 [2024-10-01 15:19:16.454456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.454463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.454469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:06.648 [2024-10-01 15:19:16.454478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.648 [2024-10-01 15:19:16.454485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.648 [2024-10-01 15:19:16.454492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.648 [2024-10-01 15:19:16.454520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.648 [2024-10-01 15:19:16.454531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.649 [2024-10-01 15:19:16.454537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 4031853 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 4031853 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 4031853 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:07.590 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 
00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:07.591 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:07.591 rmmod nvme_tcp 00:22:07.591 rmmod nvme_fabrics 00:22:07.851 rmmod nvme_keyring 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@513 -- # '[' -n 4031622 ']' 00:22:07.851 
15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # killprocess 4031622 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 4031622 ']' 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 4031622 00:22:07.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4031622) - No such process 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 4031622 is not found' 00:22:07.851 Process with pid 4031622 is not found 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-save 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@787 -- # iptables-restore 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@652 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.851 15:19:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.764 00:22:09.764 real 0m7.919s 00:22:09.764 user 0m19.609s 00:22:09.764 sys 0m1.276s 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:09.764 ************************************ 00:22:09.764 END TEST nvmf_shutdown_tc3 00:22:09.764 ************************************ 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:09.764 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.025 ************************************ 00:22:10.025 START TEST nvmf_shutdown_tc4 00:22:10.025 ************************************ 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:10.025 15:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:10.025 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.026 15:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # 
pci_devs=("${e810[@]}") 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:10.026 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:10.026 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@375 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:10.026 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:10.026 15:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:10.026 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # is_hw=yes 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.026 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:22:10.287 00:22:10.287 --- 10.0.0.2 ping statistics --- 00:22:10.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.287 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:22:10.287 00:22:10.287 --- 10.0.0.1 ping statistics --- 00:22:10.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.287 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # return 0 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:10.287 15:19:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 
15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # nvmfpid=4033199 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # waitforlisten 4033199 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 4033199 ']' 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.287 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 [2024-10-01 15:19:20.095185] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:22:10.287 [2024-10-01 15:19:20.095235] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.287 [2024-10-01 15:19:20.144357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.548 [2024-10-01 15:19:20.200396] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.548 [2024-10-01 15:19:20.200432] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.548 [2024-10-01 15:19:20.200438] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.548 [2024-10-01 15:19:20.200442] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.548 [2024-10-01 15:19:20.200447] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.548 [2024-10-01 15:19:20.200563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.548 [2024-10-01 15:19:20.200995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.548 [2024-10-01 15:19:20.201159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.548 [2024-10-01 15:19:20.201161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:11.118 [2024-10-01 15:19:20.936653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.118 15:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:11.118 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.379 15:19:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:11.379 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:11.379 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.379 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:11.379 Malloc1 00:22:11.379 [2024-10-01 15:19:21.039625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.379 Malloc2 00:22:11.379 Malloc3 00:22:11.379 Malloc4 00:22:11.379 Malloc5 00:22:11.379 Malloc6 00:22:11.638 Malloc7 00:22:11.638 Malloc8 00:22:11.638 Malloc9 
00:22:11.638 Malloc10 00:22:11.638 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.638 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:11.638 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.638 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:11.638 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=4033585 00:22:11.638 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:11.638 15:19:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:11.638 [2024-10-01 15:19:21.492096] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 4033199 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 4033199 ']' 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 4033199 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4033199 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4033199' 00:22:16.928 killing process with pid 4033199 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 4033199 00:22:16.928 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 4033199 00:22:16.928 [2024-10-01 15:19:26.517330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 
15:19:26.517373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e25b0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2a80 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2a80 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2a80 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.517754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2a80 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518290] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2f50 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2f50 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2f50 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2f50 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2f50 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2f50 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2f50 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2f50 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e20e0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e20e0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e20e0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e20e0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.518807] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e20e0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e38f0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e38f0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e38f0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e38f0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e38f0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3dc0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3dc0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3dc0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3dc0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3dc0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3dc0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519709] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3dc0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.519714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3dc0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.520042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e4290 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.520059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e4290 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.520065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e4290 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.520070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e4290 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.520474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3420 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.520492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e3420 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.522667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e47e0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.522688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e47e0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.522693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e47e0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.522970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4cb0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.522985] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4cb0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.522989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4cb0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.522994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4cb0 is same with the state(6) to be set 00:22:16.928 [2024-10-01 15:19:26.523250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3970 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.523265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3970 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.523270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3970 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.523275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3970 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.523280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3970 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.523671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ceec0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.523690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ceec0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.523695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ceec0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.523700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ceec0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525348] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244fee0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244fee0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244fee0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244fee0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244fee0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244fee0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4f60 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4f60 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4f60 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4f60 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525813] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.525841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5450 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.526166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244fa10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.526181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244fa10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527566] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b41e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b46d0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b46d0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b46d0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b46d0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.527902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b46d0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4bc0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528130] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4bc0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4bc0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4bc0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3d10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3d10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3d10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3d10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3d10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3d10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.528467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b3d10 is same with the state(6) to be set 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 starting I/O failed: -6 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error 
(sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 starting I/O failed: -6 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 [2024-10-01 15:19:26.529928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5e10 is same with the state(6) to be set 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 [2024-10-01 15:19:26.529943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5e10 is same with the state(6) to be set 00:22:16.929 starting I/O failed: -6 00:22:16.929 [2024-10-01 15:19:26.529948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5e10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.529953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5e10 is same with Write completed with error (sct=0, sc=8) 00:22:16.929 the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.529959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5e10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.529965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5e10 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.529969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5e10 is same with the state(6) to be set 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 starting I/O failed: -6 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 starting 
I/O failed: -6 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 starting I/O failed: -6 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 [2024-10-01 15:19:26.530168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b62e0 is same with the state(6) to be set 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 [2024-10-01 15:19:26.530182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b62e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.530187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b62e0 is same with the state(6) to be set 00:22:16.929 [2024-10-01 15:19:26.530192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b62e0 is same with the state(6) to be set 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 [2024-10-01 15:19:26.530197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b62e0 is same with the state(6) to be set 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 starting I/O failed: -6 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 Write completed with error (sct=0, sc=8) 00:22:16.929 [2024-10-01 15:19:26.530310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.929 [2024-10-01 15:19:26.530385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b67b0 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b67b0 is same with the state(6) to be 
set 00:22:16.930 [2024-10-01 15:19:26.530405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b67b0 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b67b0 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b67b0 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b67b0 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b67b0 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b67b0 is same with the state(6) to be set 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 [2024-10-01 15:19:26.530688] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 [2024-10-01 15:19:26.530702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 [2024-10-01 15:19:26.530716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 [2024-10-01 15:19:26.530735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 starting I/O failed: -6 00:22:16.930 [2024-10-01 15:19:26.530739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with Write completed with error (sct=0, sc=8) 00:22:16.930 the 
state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 starting I/O failed: -6 00:22:16.930 [2024-10-01 15:19:26.530760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 [2024-10-01 15:19:26.530765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 [2024-10-01 15:19:26.530784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 [2024-10-01 15:19:26.530789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b5940 is same with the state(6) to be set 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, 
sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 [2024-10-01 15:19:26.531256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error 
(sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write 
completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.930 Write completed with error (sct=0, sc=8) 00:22:16.930 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 [2024-10-01 15:19:26.532180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed 
with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write completed with error (sct=0, sc=8) 00:22:16.931 starting I/O failed: -6 00:22:16.931 Write 
completed with error (sct=0, sc=8)
00:22:16.931 starting I/O failed: -6
00:22:16.931 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:22:16.931 [2024-10-01 15:19:26.533662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.931 NVMe io qpair process completion error
00:22:16.931 Write completed with error (sct=0, sc=8)
00:22:16.931 starting I/O failed: -6
00:22:16.931 [... repeats omitted ...]
00:22:16.931 [2024-10-01 15:19:26.534812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.931 Write completed with error (sct=0, sc=8)
00:22:16.932 starting I/O failed: -6
00:22:16.932 [... repeats omitted ...]
00:22:16.932 [2024-10-01 15:19:26.535789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.932 Write completed with error (sct=0, sc=8)
00:22:16.932 starting I/O failed: -6
00:22:16.932 [... repeats omitted ...]
00:22:16.932 [2024-10-01 15:19:26.536715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.932 Write completed with error (sct=0, sc=8)
00:22:16.932 starting I/O failed: -6
00:22:16.933 [... repeats omitted ...]
00:22:16.933 [2024-10-01 15:19:26.539538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.933 NVMe io qpair process completion error
00:22:16.933 Write completed with error (sct=0, sc=8)
00:22:16.933 starting I/O failed: -6
00:22:16.933 [... repeats omitted ...]
00:22:16.933 [2024-10-01 15:19:26.540662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.933 Write completed with error (sct=0, sc=8)
00:22:16.933 starting I/O failed: -6
00:22:16.933 [... repeats omitted ...]
00:22:16.933 [2024-10-01 15:19:26.541598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.933 Write completed with error (sct=0, sc=8)
00:22:16.933 starting I/O failed: -6
00:22:16.934 [... repeats omitted ...]
00:22:16.934 [2024-10-01 15:19:26.542526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.934 Write completed with error (sct=0, sc=8)
00:22:16.934 starting I/O failed: -6
00:22:16.934 [... repeats omitted ...]
00:22:16.934 [2024-10-01 15:19:26.544420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.934 NVMe io qpair process completion error
00:22:16.934 Write completed with error (sct=0, sc=8)
00:22:16.934 starting I/O failed: -6
00:22:16.934 [... repeats omitted ...]
00:22:16.934 [2024-10-01 15:19:26.545536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.935 Write completed with error (sct=0, sc=8)
00:22:16.935 starting I/O failed: -6
00:22:16.935 [... repeats omitted ...]
00:22:16.935 [2024-10-01 15:19:26.546334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.935 Write completed with error (sct=0, sc=8)
00:22:16.935 starting I/O failed: -6
00:22:16.935 Write completed with error (sct=0, sc=8)
00:22:16.935
starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 
Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 [2024-10-01 15:19:26.547247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error 
(sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with 
error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.935 Write completed with error (sct=0, sc=8) 00:22:16.935 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed 
with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 [2024-10-01 15:19:26.548901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.936 NVMe io qpair process completion error 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O 
failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 [2024-10-01 15:19:26.550103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.936 starting I/O failed: -6 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write 
completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O 
failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 [2024-10-01 15:19:26.550963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with 
error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.936 starting I/O failed: -6 00:22:16.936 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 
Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 [2024-10-01 15:19:26.551908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 
00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, 
sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error 
(sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 [2024-10-01 15:19:26.555100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.937 NVMe io qpair process completion error 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write 
completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 starting I/O failed: -6 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.937 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 [2024-10-01 15:19:26.556339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write completed with error (sct=0, sc=8) 00:22:16.938 starting I/O failed: -6 00:22:16.938 Write 
completed with error (sct=0, sc=8)
00:22:16.938 Write completed with error (sct=0, sc=8)
00:22:16.938 starting I/O failed: -6
00:22:16.938 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:16.938 [2024-10-01 15:19:26.557168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.938 [... repeated write-error entries elided ...]
00:22:16.938 [2024-10-01 15:19:26.558301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.938 [... repeated write-error entries elided ...]
00:22:16.939 [2024-10-01 15:19:26.560201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.939 NVMe io qpair process completion error
00:22:16.939 [... repeated write-error entries elided ...]
00:22:16.939 [2024-10-01 15:19:26.561346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.939 [... repeated write-error entries elided ...]
00:22:16.939 [2024-10-01 15:19:26.562199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.939 [... repeated write-error entries elided ...]
00:22:16.940 [2024-10-01 15:19:26.563130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.940 [... repeated write-error entries elided ...]
00:22:16.940 [2024-10-01 15:19:26.564828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.940 NVMe io qpair process completion error
00:22:16.940 [... repeated write-error entries elided ...]
00:22:16.941 [2024-10-01 15:19:26.565950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.941 [... repeated write-error entries elided ...]
00:22:16.941 [2024-10-01 15:19:26.566749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.941 [... repeated write-error entries elided ...]
00:22:16.941 [2024-10-01 15:19:26.567685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.941 [... repeated write-error entries continue ...]
00:22:16.941 Write completed with error (sct=0,
sc=8) 00:22:16.941 starting I/O failed: -6 00:22:16.941 Write completed with error (sct=0, sc=8) 00:22:16.941 starting I/O failed: -6 00:22:16.941 Write completed with error (sct=0, sc=8) 00:22:16.941 starting I/O failed: -6 00:22:16.941 Write completed with error (sct=0, sc=8) 00:22:16.941 starting I/O failed: -6 00:22:16.941 Write completed with error (sct=0, sc=8) 00:22:16.941 starting I/O failed: -6 00:22:16.941 Write completed with error (sct=0, sc=8) 00:22:16.941 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error 
(sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 [2024-10-01 15:19:26.569158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.942 NVMe io qpair process completion error 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write 
completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, 
sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 
Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.942 
Write completed with error (sct=0, sc=8) 00:22:16.942 starting I/O failed: -6 00:22:16.942 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, 
sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O 
failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting 
I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 
starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 [2024-10-01 15:19:26.573135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.943 NVMe io qpair process completion error 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 
Write completed with error (sct=0, sc=8) 00:22:16.943 starting I/O failed: -6 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.943 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 [2024-10-01 15:19:26.574461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O 
failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 [2024-10-01 15:19:26.575421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error 
(sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 starting I/O failed: -6 00:22:16.944 Write completed with error (sct=0, sc=8) 00:22:16.944 Write 
completed with error (sct=0, sc=8)
00:22:16.944 starting I/O failed: -6
00:22:16.944 Write completed with error (sct=0, sc=8)
00:22:16.944 starting I/O failed: -6
[identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages repeated many times; duplicates trimmed]
00:22:16.944 [2024-10-01 15:19:26.576362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[further repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages trimmed]
00:22:16.945 [2024-10-01 15:19:26.579678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.945 NVMe io qpair process completion error
00:22:16.945 Initializing NVMe Controllers
00:22:16.945
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:16.945 Controller IO queue size 128, less than required.
00:22:16.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:16.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:16.945 Initialization complete. Launching workers.
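Once the workers launch, spdk_nvme_perf prints a per-controller latency summary ending in a Total row. As a sanity check on how such a Total row is formed, the sketch below (plain awk; the IOPS and average-latency values are copied from this run's summary, so any small drift from the printed Total is just per-row rounding) sums the per-controller IOPS and computes an IOPS-weighted average latency:

```shell
# Columns: IOPS  average latency (us), one row per controller, taken from the
# spdk_nvme_perf summary in this log. The Total IOPS is a plain sum; the Total
# average latency is weighted by each controller's IOPS.
summary=$(awk '{iops += $1; w += $1 * $2}
END {printf "total IOPS %.2f, weighted avg latency %.2f us\n", iops, w / iops}' <<'EOF'
1916.42 66807.40
1906.15 67188.13
1892.54 67682.69
1864.48 68726.11
1842.28 68867.43
1896.94 66903.44
1878.51 67598.13
1887.73 67291.48
1882.91 67486.24
1887.10 67382.72
EOF
)
echo "$summary"
```

The printed totals land very close to the log's Total row (18855.05 IOPS, 67587.04 us average), which is consistent with the Total being an IOPS-weighted aggregate of the per-controller rows.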
00:22:16.945 ========================================================
00:22:16.945 Latency(us)
00:22:16.945 Device Information : IOPS MiB/s Average min max
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1916.42 82.35 66807.40 852.61 119462.29
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1906.15 81.91 67188.13 793.07 119307.14
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1892.54 81.32 67682.69 676.03 120025.10
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1864.48 80.11 68726.11 606.41 120279.00
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1842.28 79.16 68867.43 570.34 119491.45
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1896.94 81.51 66903.44 691.17 118332.97
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1878.51 80.72 67598.13 697.64 120429.37
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1887.73 81.11 67291.48 815.21 119820.37
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1882.91 80.91 67486.24 621.13 123480.76
00:22:16.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1887.10 81.09 67382.72 825.90 120056.39
00:22:16.945 ========================================================
00:22:16.945 Total : 18855.05 810.18 67587.04 570.34 123480.76
00:22:16.945
00:22:16.945 [2024-10-01 15:19:26.583107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a630 is same with the state(6) to be set
00:22:16.945 [2024-10-01 15:19:26.583152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e60 is same with the state(6) to be set
00:22:16.945 [2024-10-01 15:19:26.583183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x119ac90 is same with the state(6) to be set 00:22:16.945 [2024-10-01 15:19:26.583213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a14c0 is same with the state(6) to be set 00:22:16.945 [2024-10-01 15:19:26.583241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119cbb0 is same with the state(6) to be set 00:22:16.945 [2024-10-01 15:19:26.583269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1190 is same with the state(6) to be set 00:22:16.945 [2024-10-01 15:19:26.583302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119afc0 is same with the state(6) to be set 00:22:16.945 [2024-10-01 15:19:26.583330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a960 is same with the state(6) to be set 00:22:16.945 [2024-10-01 15:19:26.583360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119c7f0 is same with the state(6) to be set 00:22:16.945 [2024-10-01 15:19:26.583388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119c9d0 is same with the state(6) to be set 00:22:16.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:16.945 15:19:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 4033585 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 4033585 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@638 -- # local arg=wait 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 4033585 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@512 -- # nvmfcleanup 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.332 rmmod nvme_tcp 00:22:18.332 rmmod nvme_fabrics 00:22:18.332 rmmod nvme_keyring 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@513 -- # '[' -n 4033199 ']' 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # killprocess 4033199 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 4033199 ']' 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 4033199 00:22:18.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4033199) - No such process 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 4033199 is not found' 00:22:18.332 Process with pid 4033199 is not found 
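The `kill -0 4033199` probe above (which fails with "No such process" because the target app already exited) is the standard shell liveness check: signal 0 delivers nothing, it only verifies that the PID exists and may be signalled. A minimal sketch of that idiom, with a hypothetical `is_alive` helper rather than the autotest `killprocess` function:

```shell
# kill -0 sends no signal; its exit status alone tells us whether the PID
# exists (and we have permission to signal it), so it works as a liveness probe.
is_alive() {
    kill -0 "$1" 2>/dev/null
}

sleep 30 &
pid=$!

is_alive "$pid" && echo "process $pid is running"

kill "$pid"
wait "$pid" 2>/dev/null

if ! is_alive "$pid"; then
    # Mirrors the log's "kill: (PID) - No such process" situation.
    echo "Process with pid $pid is not found"
fi
```

Note the `wait` before the second probe: until the child is reaped it remains a zombie, and `kill -0` would still report it as present.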
00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-save 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@787 -- # iptables-restore 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.332 15:19:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.245 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:20.245 00:22:20.245 real 0m10.299s 00:22:20.245 user 0m28.039s 00:22:20.245 sys 0m3.992s 00:22:20.245 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.245 15:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:20.245 ************************************ 00:22:20.245 END TEST nvmf_shutdown_tc4 00:22:20.245 ************************************ 00:22:20.245 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:20.245 00:22:20.245 real 0m42.474s 00:22:20.245 user 1m43.932s 00:22:20.245 sys 0m13.421s 00:22:20.245 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.245 15:19:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:20.245 ************************************ 00:22:20.245 END TEST nvmf_shutdown 00:22:20.245 ************************************ 00:22:20.245 15:19:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:20.245 00:22:20.245 real 12m45.017s 00:22:20.245 user 26m53.715s 00:22:20.245 sys 3m42.494s 00:22:20.245 15:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.245 15:19:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:20.245 ************************************ 00:22:20.246 END TEST nvmf_target_extra 00:22:20.246 ************************************ 00:22:20.246 15:19:30 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:20.246 15:19:30 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:20.246 15:19:30 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.246 15:19:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.507 ************************************ 00:22:20.507 START TEST nvmf_host 00:22:20.507 ************************************ 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:20.507 * Looking for test storage... 00:22:20.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.507 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.507 --rc genhtml_branch_coverage=1 00:22:20.507 --rc genhtml_function_coverage=1 00:22:20.507 --rc genhtml_legend=1 00:22:20.507 --rc geninfo_all_blocks=1 00:22:20.507 --rc geninfo_unexecuted_blocks=1 00:22:20.507 00:22:20.507 ' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:20.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.508 --rc genhtml_branch_coverage=1 00:22:20.508 --rc genhtml_function_coverage=1 00:22:20.508 --rc genhtml_legend=1 00:22:20.508 --rc 
geninfo_all_blocks=1 00:22:20.508 --rc geninfo_unexecuted_blocks=1 00:22:20.508 00:22:20.508 ' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:20.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.508 --rc genhtml_branch_coverage=1 00:22:20.508 --rc genhtml_function_coverage=1 00:22:20.508 --rc genhtml_legend=1 00:22:20.508 --rc geninfo_all_blocks=1 00:22:20.508 --rc geninfo_unexecuted_blocks=1 00:22:20.508 00:22:20.508 ' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:20.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.508 --rc genhtml_branch_coverage=1 00:22:20.508 --rc genhtml_function_coverage=1 00:22:20.508 --rc genhtml_legend=1 00:22:20.508 --rc geninfo_all_blocks=1 00:22:20.508 --rc geninfo_unexecuted_blocks=1 00:22:20.508 00:22:20.508 ' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.508 15:19:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.770 ************************************ 00:22:20.770 START TEST nvmf_multicontroller 00:22:20.770 ************************************ 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:20.770 * Looking for test storage... 
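The `lt 1.15 2` / `cmp_versions` xtrace earlier in this log splits dotted version strings on `.`/`-` and compares them field by field, padding the shorter one with zeros. A simplified sketch of that idea (numeric dotted versions only; this is not the SPDK `scripts/common.sh` implementation itself):

```shell
# Return success (0) when dotted version $1 is strictly less than $2,
# comparing numeric fields left to right and treating missing fields as 0.
version_lt() {
    local IFS=.-
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}
        y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1  # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Numeric per-field comparison is what makes `1.15 < 2` and `2.9 < 2.10` come out right, where a plain string comparison would not.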
00:22:20.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.770 --rc genhtml_branch_coverage=1 00:22:20.770 --rc genhtml_function_coverage=1 
00:22:20.770 --rc genhtml_legend=1 00:22:20.770 --rc geninfo_all_blocks=1 00:22:20.770 --rc geninfo_unexecuted_blocks=1 00:22:20.770 00:22:20.770 ' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.770 --rc genhtml_branch_coverage=1 00:22:20.770 --rc genhtml_function_coverage=1 00:22:20.770 --rc genhtml_legend=1 00:22:20.770 --rc geninfo_all_blocks=1 00:22:20.770 --rc geninfo_unexecuted_blocks=1 00:22:20.770 00:22:20.770 ' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.770 --rc genhtml_branch_coverage=1 00:22:20.770 --rc genhtml_function_coverage=1 00:22:20.770 --rc genhtml_legend=1 00:22:20.770 --rc geninfo_all_blocks=1 00:22:20.770 --rc geninfo_unexecuted_blocks=1 00:22:20.770 00:22:20.770 ' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:20.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.770 --rc genhtml_branch_coverage=1 00:22:20.770 --rc genhtml_function_coverage=1 00:22:20.770 --rc genhtml_legend=1 00:22:20.770 --rc geninfo_all_blocks=1 00:22:20.770 --rc geninfo_unexecuted_blocks=1 00:22:20.770 00:22:20.770 ' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.770 15:19:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@436 -- # remove_spdk_ns 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.770 15:19:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
pci_devs+=("${e810[@]}") 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:28.902 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:28.902 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:28.903 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:28.903 15:19:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:28.903 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@407 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:28.903 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # is_hw=yes 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.903 15:19:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.903 15:19:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:22:28.903 00:22:28.903 --- 10.0.0.2 ping statistics --- 00:22:28.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.903 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:22:28.903 00:22:28.903 --- 10.0.0.1 ping statistics --- 00:22:28.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.903 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # return 0 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:28.903 15:19:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # nvmfpid=4038994 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # waitforlisten 4038994 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 4038994 ']' 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.903 15:19:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.903 [2024-10-01 15:19:37.709942] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:22:28.904 [2024-10-01 15:19:37.710017] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.904 [2024-10-01 15:19:37.795370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:28.904 [2024-10-01 15:19:37.871763] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.904 [2024-10-01 15:19:37.871811] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.904 [2024-10-01 15:19:37.871819] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.904 [2024-10-01 15:19:37.871826] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.904 [2024-10-01 15:19:37.871832] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.904 [2024-10-01 15:19:37.871972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.904 [2024-10-01 15:19:37.872135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.904 [2024-10-01 15:19:37.872272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 [2024-10-01 15:19:38.553782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:22:28.904 Malloc0 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 [2024-10-01 15:19:38.620603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:28.904 
15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 [2024-10-01 15:19:38.632532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 Malloc1 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4039344 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4039344 /var/tmp/bdevperf.sock 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 4039344 ']' 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.904 15:19:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.845 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.845 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:29.845 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:29.845 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.845 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.105 NVMe0n1 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.105 15:19:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.105 1 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.105 request: 00:22:30.105 { 00:22:30.105 "name": "NVMe0", 00:22:30.105 "trtype": "tcp", 00:22:30.105 "traddr": "10.0.0.2", 00:22:30.105 "adrfam": "ipv4", 00:22:30.105 "trsvcid": "4420", 00:22:30.105 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:30.105 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:30.105 "hostaddr": "10.0.0.1", 00:22:30.105 "prchk_reftag": false, 00:22:30.105 "prchk_guard": false, 00:22:30.105 "hdgst": false, 00:22:30.105 "ddgst": false, 00:22:30.105 "allow_unrecognized_csi": false, 00:22:30.105 "method": "bdev_nvme_attach_controller", 00:22:30.105 "req_id": 1 00:22:30.105 } 00:22:30.105 Got JSON-RPC error response 00:22:30.105 response: 00:22:30.105 { 00:22:30.105 "code": -114, 00:22:30.105 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:30.105 } 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.105 request: 00:22:30.105 { 00:22:30.105 "name": "NVMe0", 00:22:30.105 "trtype": "tcp", 00:22:30.105 "traddr": "10.0.0.2", 00:22:30.105 "adrfam": "ipv4", 00:22:30.105 "trsvcid": "4420", 00:22:30.105 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:30.105 "hostaddr": "10.0.0.1", 00:22:30.105 "prchk_reftag": false, 00:22:30.105 "prchk_guard": false, 00:22:30.105 "hdgst": false, 00:22:30.105 "ddgst": false, 00:22:30.105 "allow_unrecognized_csi": false, 00:22:30.105 "method": "bdev_nvme_attach_controller", 00:22:30.105 "req_id": 1 00:22:30.105 } 00:22:30.105 Got JSON-RPC error response 00:22:30.105 response: 00:22:30.105 { 00:22:30.105 "code": -114, 00:22:30.105 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:30.105 } 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.105 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.106 request: 00:22:30.106 { 00:22:30.106 "name": "NVMe0", 00:22:30.106 "trtype": "tcp", 00:22:30.106 "traddr": "10.0.0.2", 00:22:30.106 "adrfam": "ipv4", 00:22:30.106 "trsvcid": "4420", 00:22:30.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.106 
"hostaddr": "10.0.0.1", 00:22:30.106 "prchk_reftag": false, 00:22:30.106 "prchk_guard": false, 00:22:30.106 "hdgst": false, 00:22:30.106 "ddgst": false, 00:22:30.106 "multipath": "disable", 00:22:30.106 "allow_unrecognized_csi": false, 00:22:30.106 "method": "bdev_nvme_attach_controller", 00:22:30.106 "req_id": 1 00:22:30.106 } 00:22:30.106 Got JSON-RPC error response 00:22:30.106 response: 00:22:30.106 { 00:22:30.106 "code": -114, 00:22:30.106 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:30.106 } 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.106 request: 00:22:30.106 { 00:22:30.106 "name": "NVMe0", 00:22:30.106 "trtype": "tcp", 00:22:30.106 "traddr": "10.0.0.2", 00:22:30.106 "adrfam": "ipv4", 00:22:30.106 "trsvcid": "4420", 00:22:30.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.106 "hostaddr": "10.0.0.1", 00:22:30.106 "prchk_reftag": false, 00:22:30.106 "prchk_guard": false, 00:22:30.106 "hdgst": false, 00:22:30.106 "ddgst": false, 00:22:30.106 "multipath": "failover", 00:22:30.106 "allow_unrecognized_csi": false, 00:22:30.106 "method": "bdev_nvme_attach_controller", 00:22:30.106 "req_id": 1 00:22:30.106 } 00:22:30.106 Got JSON-RPC error response 00:22:30.106 response: 00:22:30.106 { 00:22:30.106 "code": -114, 00:22:30.106 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:30.106 } 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:30.106 
15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.106 15:19:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.366 00:22:30.366 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.366 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:30.366 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.366 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.366 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.366 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:30.366 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.366 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.627 00:22:30.627 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.627 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:22:30.627 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:30.627 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.627 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:30.627 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.627 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:30.627 15:19:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.569 { 00:22:31.569 "results": [ 00:22:31.569 { 00:22:31.569 "job": "NVMe0n1", 00:22:31.569 "core_mask": "0x1", 00:22:31.569 "workload": "write", 00:22:31.569 "status": "finished", 00:22:31.569 "queue_depth": 128, 00:22:31.569 "io_size": 4096, 00:22:31.569 "runtime": 1.007693, 00:22:31.569 "iops": 28626.774225880305, 00:22:31.569 "mibps": 111.82333681984494, 00:22:31.569 "io_failed": 0, 00:22:31.569 "io_timeout": 0, 00:22:31.569 "avg_latency_us": 4462.805956020846, 00:22:31.569 "min_latency_us": 2116.266666666667, 00:22:31.569 "max_latency_us": 13271.04 00:22:31.569 } 00:22:31.569 ], 00:22:31.569 "core_count": 1 00:22:31.569 } 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 4039344 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 4039344 ']' 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 4039344 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.569 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4039344 00:22:31.830 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:31.830 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:31.830 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4039344' 00:22:31.830 killing process with pid 4039344 00:22:31.830 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 4039344 00:22:31.830 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 4039344 00:22:31.830 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.830 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:31.831 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:31.831 [2024-10-01 15:19:38.754338] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:22:31.831 [2024-10-01 15:19:38.754398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039344 ] 00:22:31.831 [2024-10-01 15:19:38.814897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.831 [2024-10-01 15:19:38.879864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.831 [2024-10-01 15:19:40.251696] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 84a75e92-b213-40c9-a421-5275ec381e47 already exists 00:22:31.831 [2024-10-01 15:19:40.251727] bdev.c:7837:bdev_register: *ERROR*: Unable to add uuid:84a75e92-b213-40c9-a421-5275ec381e47 alias for bdev NVMe1n1 00:22:31.831 [2024-10-01 15:19:40.251737] bdev_nvme.c:4481:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:31.831 Running I/O for 1 seconds... 00:22:31.831 28609.00 IOPS, 111.75 MiB/s 00:22:31.831 Latency(us) 00:22:31.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.831 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:31.831 NVMe0n1 : 1.01 28626.77 111.82 0.00 0.00 4462.81 2116.27 13271.04 00:22:31.831 =================================================================================================================== 00:22:31.831 Total : 28626.77 111.82 0.00 0.00 4462.81 2116.27 13271.04 00:22:31.831 Received shutdown signal, test time was about 1.000000 seconds 00:22:31.831 00:22:31.831 Latency(us) 00:22:31.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.831 =================================================================================================================== 00:22:31.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.831 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:31.831 15:19:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.831 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.831 rmmod nvme_tcp 00:22:32.091 rmmod nvme_fabrics 00:22:32.091 rmmod nvme_keyring 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@513 -- # '[' -n 4038994 ']' 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # killprocess 4038994 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 4038994 ']' 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 4038994 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:32.092 15:19:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4038994 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4038994' 00:22:32.092 killing process with pid 4038994 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 4038994 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 4038994 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-save 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:32.092 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@787 -- # iptables-restore 00:22:32.352 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.352 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.352 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:32.352 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.352 15:19:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.264 15:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.264 00:22:34.264 real 0m13.643s 00:22:34.264 user 0m17.611s 00:22:34.264 sys 0m5.995s 00:22:34.264 15:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.264 15:19:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:34.264 ************************************ 00:22:34.264 END TEST nvmf_multicontroller 00:22:34.264 ************************************ 00:22:34.264 15:19:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:34.264 15:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:34.264 15:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.264 15:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.264 ************************************ 00:22:34.264 START TEST nvmf_aer 00:22:34.264 ************************************ 00:22:34.264 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:34.525 * Looking for test storage... 
00:22:34.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:34.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.525 --rc genhtml_branch_coverage=1 00:22:34.525 --rc genhtml_function_coverage=1 00:22:34.525 --rc genhtml_legend=1 00:22:34.525 --rc geninfo_all_blocks=1 00:22:34.525 --rc geninfo_unexecuted_blocks=1 00:22:34.525 00:22:34.525 ' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:34.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.525 --rc 
genhtml_branch_coverage=1 00:22:34.525 --rc genhtml_function_coverage=1 00:22:34.525 --rc genhtml_legend=1 00:22:34.525 --rc geninfo_all_blocks=1 00:22:34.525 --rc geninfo_unexecuted_blocks=1 00:22:34.525 00:22:34.525 ' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:34.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.525 --rc genhtml_branch_coverage=1 00:22:34.525 --rc genhtml_function_coverage=1 00:22:34.525 --rc genhtml_legend=1 00:22:34.525 --rc geninfo_all_blocks=1 00:22:34.525 --rc geninfo_unexecuted_blocks=1 00:22:34.525 00:22:34.525 ' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:34.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.525 --rc genhtml_branch_coverage=1 00:22:34.525 --rc genhtml_function_coverage=1 00:22:34.525 --rc genhtml_legend=1 00:22:34.525 --rc geninfo_all_blocks=1 00:22:34.525 --rc geninfo_unexecuted_blocks=1 00:22:34.525 00:22:34.525 ' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.525 15:19:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:34.525 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:34.526 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:34.526 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.526 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.526 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.526 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:34.526 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:34.526 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.526 15:19:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.661 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # [[ tcp == rdma 
]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:42.662 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:42.662 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@390 -- # (( 0 > 0 )) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:42.662 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:42.662 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # is_hw=yes 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:42.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:22:42.662 00:22:42.662 --- 10.0.0.2 ping statistics --- 00:22:42.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.662 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:22:42.662 00:22:42.662 --- 10.0.0.1 ping statistics --- 00:22:42.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.662 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # return 0 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # nvmfpid=4044028 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # waitforlisten 4044028 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 4044028 ']' 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.662 15:19:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:42.662 [2024-10-01 15:19:51.886737] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:22:42.663 [2024-10-01 15:19:51.886808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.663 [2024-10-01 15:19:51.961487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.663 [2024-10-01 15:19:52.036600] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:42.663 [2024-10-01 15:19:52.036645] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.663 [2024-10-01 15:19:52.036653] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.663 [2024-10-01 15:19:52.036660] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.663 [2024-10-01 15:19:52.036666] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.663 [2024-10-01 15:19:52.036804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.663 [2024-10-01 15:19:52.036932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.663 [2024-10-01 15:19:52.037077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.663 [2024-10-01 15:19:52.037079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.923 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:42.924 [2024-10-01 15:19:52.745254] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:42.924 Malloc0 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.924 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.184 [2024-10-01 15:19:52.804644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.184 [ 00:22:43.184 { 00:22:43.184 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:43.184 "subtype": "Discovery", 00:22:43.184 "listen_addresses": [], 00:22:43.184 "allow_any_host": true, 00:22:43.184 "hosts": [] 00:22:43.184 }, 00:22:43.184 { 00:22:43.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.184 "subtype": "NVMe", 00:22:43.184 "listen_addresses": [ 00:22:43.184 { 00:22:43.184 "trtype": "TCP", 00:22:43.184 "adrfam": "IPv4", 00:22:43.184 "traddr": "10.0.0.2", 00:22:43.184 "trsvcid": "4420" 00:22:43.184 } 00:22:43.184 ], 00:22:43.184 "allow_any_host": true, 00:22:43.184 "hosts": [], 00:22:43.184 "serial_number": "SPDK00000000000001", 00:22:43.184 "model_number": "SPDK bdev Controller", 00:22:43.184 "max_namespaces": 2, 00:22:43.184 "min_cntlid": 1, 00:22:43.184 "max_cntlid": 65519, 00:22:43.184 "namespaces": [ 00:22:43.184 { 00:22:43.184 "nsid": 1, 00:22:43.184 "bdev_name": "Malloc0", 00:22:43.184 "name": "Malloc0", 00:22:43.184 "nguid": "3763B7414530425788083810353E0C1E", 00:22:43.184 "uuid": "3763b741-4530-4257-8808-3810353e0c1e" 00:22:43.184 } 00:22:43.184 ] 00:22:43.184 } 00:22:43.184 ] 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=4044324 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:43.184 15:19:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.446 Malloc1 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.446 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.446 Asynchronous Event Request test 00:22:43.446 Attaching to 10.0.0.2 00:22:43.446 Attached to 10.0.0.2 00:22:43.446 Registering asynchronous event callbacks... 00:22:43.446 Starting namespace attribute notice tests for all controllers... 00:22:43.446 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:43.446 aer_cb - Changed Namespace 00:22:43.446 Cleaning up... 
00:22:43.446 [ 00:22:43.446 { 00:22:43.446 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:43.446 "subtype": "Discovery", 00:22:43.446 "listen_addresses": [], 00:22:43.447 "allow_any_host": true, 00:22:43.447 "hosts": [] 00:22:43.447 }, 00:22:43.447 { 00:22:43.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.447 "subtype": "NVMe", 00:22:43.447 "listen_addresses": [ 00:22:43.447 { 00:22:43.447 "trtype": "TCP", 00:22:43.447 "adrfam": "IPv4", 00:22:43.447 "traddr": "10.0.0.2", 00:22:43.447 "trsvcid": "4420" 00:22:43.447 } 00:22:43.447 ], 00:22:43.447 "allow_any_host": true, 00:22:43.447 "hosts": [], 00:22:43.447 "serial_number": "SPDK00000000000001", 00:22:43.447 "model_number": "SPDK bdev Controller", 00:22:43.447 "max_namespaces": 2, 00:22:43.447 "min_cntlid": 1, 00:22:43.447 "max_cntlid": 65519, 00:22:43.447 "namespaces": [ 00:22:43.447 { 00:22:43.447 "nsid": 1, 00:22:43.447 "bdev_name": "Malloc0", 00:22:43.447 "name": "Malloc0", 00:22:43.447 "nguid": "3763B7414530425788083810353E0C1E", 00:22:43.447 "uuid": "3763b741-4530-4257-8808-3810353e0c1e" 00:22:43.447 }, 00:22:43.447 { 00:22:43.447 "nsid": 2, 00:22:43.447 "bdev_name": "Malloc1", 00:22:43.447 "name": "Malloc1", 00:22:43.447 "nguid": "5ABF4161097B4FBAB99BD360EEE6ED10", 00:22:43.447 "uuid": "5abf4161-097b-4fba-b99b-d360eee6ed10" 00:22:43.447 } 00:22:43.447 ] 00:22:43.447 } 00:22:43.447 ] 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 4044324 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.447 15:19:53 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.447 rmmod nvme_tcp 00:22:43.447 rmmod nvme_fabrics 00:22:43.447 rmmod nvme_keyring 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@513 -- # '[' -n 
4044028 ']' 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # killprocess 4044028 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 4044028 ']' 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 4044028 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4044028 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4044028' 00:22:43.447 killing process with pid 4044028 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 4044028 00:22:43.447 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 4044028 00:22:43.708 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:43.708 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:43.708 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:43.708 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:43.709 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:43.709 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-save 00:22:43.709 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@787 -- # iptables-restore 00:22:43.709 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.709 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:43.709 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.709 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.709 15:19:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:46.253 00:22:46.253 real 0m11.401s 00:22:46.253 user 0m7.761s 00:22:46.253 sys 0m6.069s 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:46.253 ************************************ 00:22:46.253 END TEST nvmf_aer 00:22:46.253 ************************************ 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.253 ************************************ 00:22:46.253 START TEST nvmf_async_init 00:22:46.253 ************************************ 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:46.253 * Looking for test storage... 
00:22:46.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.253 15:19:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:46.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.253 --rc genhtml_branch_coverage=1 00:22:46.253 --rc genhtml_function_coverage=1 00:22:46.253 --rc genhtml_legend=1 00:22:46.253 --rc geninfo_all_blocks=1 00:22:46.253 --rc geninfo_unexecuted_blocks=1 00:22:46.253 
00:22:46.253 ' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:46.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.253 --rc genhtml_branch_coverage=1 00:22:46.253 --rc genhtml_function_coverage=1 00:22:46.253 --rc genhtml_legend=1 00:22:46.253 --rc geninfo_all_blocks=1 00:22:46.253 --rc geninfo_unexecuted_blocks=1 00:22:46.253 00:22:46.253 ' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:46.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.253 --rc genhtml_branch_coverage=1 00:22:46.253 --rc genhtml_function_coverage=1 00:22:46.253 --rc genhtml_legend=1 00:22:46.253 --rc geninfo_all_blocks=1 00:22:46.253 --rc geninfo_unexecuted_blocks=1 00:22:46.253 00:22:46.253 ' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:46.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.253 --rc genhtml_branch_coverage=1 00:22:46.253 --rc genhtml_function_coverage=1 00:22:46.253 --rc genhtml_legend=1 00:22:46.253 --rc geninfo_all_blocks=1 00:22:46.253 --rc geninfo_unexecuted_blocks=1 00:22:46.253 00:22:46.253 ' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.253 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6e897f95343945d09cc6156c49c55b41 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:46.254 15:19:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:22:54.390 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:54.391 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:54.391 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # (( 0 > 0 )) 
00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:54.391 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ up == up ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:22:54.391 15:20:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:54.391 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # is_hw=yes 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.391 15:20:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:54.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:54.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:22:54.391 00:22:54.391 --- 10.0.0.2 ping statistics --- 00:22:54.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.391 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:22:54.391 00:22:54.391 --- 10.0.0.1 ping statistics --- 00:22:54.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.391 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # return 0 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # nvmfpid=4048429 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # waitforlisten 4048429 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 4048429 ']' 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.391 15:20:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.391 [2024-10-01 15:20:03.265680] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:22:54.391 [2024-10-01 15:20:03.265749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.391 [2024-10-01 15:20:03.336280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.391 [2024-10-01 15:20:03.410606] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.391 [2024-10-01 15:20:03.410647] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.391 [2024-10-01 15:20:03.410655] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.391 [2024-10-01 15:20:03.410661] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.391 [2024-10-01 15:20:03.410667] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:54.391 [2024-10-01 15:20:03.410686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.391 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.391 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:54.391 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:54.391 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.391 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.392 [2024-10-01 15:20:04.107409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.392 null0 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6e897f95343945d09cc6156c49c55b41 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.392 [2024-10-01 15:20:04.167686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.392 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.652 nvme0n1 00:22:54.652 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.652 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:54.652 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.652 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.652 [ 00:22:54.652 { 00:22:54.652 "name": "nvme0n1", 00:22:54.652 "aliases": [ 00:22:54.652 "6e897f95-3439-45d0-9cc6-156c49c55b41" 00:22:54.652 ], 00:22:54.652 "product_name": "NVMe disk", 00:22:54.652 "block_size": 512, 00:22:54.652 "num_blocks": 2097152, 00:22:54.652 "uuid": "6e897f95-3439-45d0-9cc6-156c49c55b41", 00:22:54.652 "numa_id": 0, 00:22:54.652 "assigned_rate_limits": { 00:22:54.652 "rw_ios_per_sec": 0, 00:22:54.652 "rw_mbytes_per_sec": 0, 00:22:54.652 "r_mbytes_per_sec": 0, 00:22:54.652 "w_mbytes_per_sec": 0 00:22:54.652 }, 00:22:54.652 "claimed": false, 00:22:54.652 "zoned": false, 00:22:54.652 "supported_io_types": { 00:22:54.652 "read": true, 00:22:54.652 "write": true, 00:22:54.652 "unmap": false, 00:22:54.652 "flush": true, 00:22:54.652 "reset": true, 00:22:54.652 "nvme_admin": true, 00:22:54.652 "nvme_io": true, 00:22:54.652 "nvme_io_md": false, 00:22:54.652 "write_zeroes": true, 00:22:54.652 "zcopy": false, 00:22:54.652 "get_zone_info": false, 00:22:54.652 "zone_management": false, 00:22:54.652 "zone_append": false, 00:22:54.652 "compare": true, 00:22:54.652 "compare_and_write": true, 00:22:54.652 "abort": true, 00:22:54.652 "seek_hole": false, 00:22:54.652 "seek_data": false, 00:22:54.652 "copy": true, 00:22:54.652 
"nvme_iov_md": false 00:22:54.652 }, 00:22:54.652 "memory_domains": [ 00:22:54.652 { 00:22:54.652 "dma_device_id": "system", 00:22:54.652 "dma_device_type": 1 00:22:54.652 } 00:22:54.652 ], 00:22:54.652 "driver_specific": { 00:22:54.652 "nvme": [ 00:22:54.652 { 00:22:54.652 "trid": { 00:22:54.652 "trtype": "TCP", 00:22:54.652 "adrfam": "IPv4", 00:22:54.652 "traddr": "10.0.0.2", 00:22:54.652 "trsvcid": "4420", 00:22:54.652 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:54.652 }, 00:22:54.652 "ctrlr_data": { 00:22:54.652 "cntlid": 1, 00:22:54.652 "vendor_id": "0x8086", 00:22:54.652 "model_number": "SPDK bdev Controller", 00:22:54.652 "serial_number": "00000000000000000000", 00:22:54.652 "firmware_revision": "25.01", 00:22:54.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:54.652 "oacs": { 00:22:54.652 "security": 0, 00:22:54.652 "format": 0, 00:22:54.652 "firmware": 0, 00:22:54.652 "ns_manage": 0 00:22:54.652 }, 00:22:54.652 "multi_ctrlr": true, 00:22:54.652 "ana_reporting": false 00:22:54.652 }, 00:22:54.652 "vs": { 00:22:54.652 "nvme_version": "1.3" 00:22:54.652 }, 00:22:54.652 "ns_data": { 00:22:54.652 "id": 1, 00:22:54.652 "can_share": true 00:22:54.652 } 00:22:54.652 } 00:22:54.652 ], 00:22:54.652 "mp_policy": "active_passive" 00:22:54.652 } 00:22:54.652 } 00:22:54.652 ] 00:22:54.652 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.652 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:54.652 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.652 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.652 [2024-10-01 15:20:04.441907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:54.652 [2024-10-01 15:20:04.441969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0xb44700 (9): Bad file descriptor 00:22:54.927 [2024-10-01 15:20:04.574093] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:54.927 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.927 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:54.927 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.927 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.927 [ 00:22:54.927 { 00:22:54.927 "name": "nvme0n1", 00:22:54.927 "aliases": [ 00:22:54.927 "6e897f95-3439-45d0-9cc6-156c49c55b41" 00:22:54.927 ], 00:22:54.927 "product_name": "NVMe disk", 00:22:54.927 "block_size": 512, 00:22:54.927 "num_blocks": 2097152, 00:22:54.927 "uuid": "6e897f95-3439-45d0-9cc6-156c49c55b41", 00:22:54.927 "numa_id": 0, 00:22:54.927 "assigned_rate_limits": { 00:22:54.927 "rw_ios_per_sec": 0, 00:22:54.927 "rw_mbytes_per_sec": 0, 00:22:54.927 "r_mbytes_per_sec": 0, 00:22:54.927 "w_mbytes_per_sec": 0 00:22:54.927 }, 00:22:54.927 "claimed": false, 00:22:54.927 "zoned": false, 00:22:54.927 "supported_io_types": { 00:22:54.927 "read": true, 00:22:54.927 "write": true, 00:22:54.927 "unmap": false, 00:22:54.927 "flush": true, 00:22:54.927 "reset": true, 00:22:54.927 "nvme_admin": true, 00:22:54.927 "nvme_io": true, 00:22:54.927 "nvme_io_md": false, 00:22:54.927 "write_zeroes": true, 00:22:54.927 "zcopy": false, 00:22:54.927 "get_zone_info": false, 00:22:54.927 "zone_management": false, 00:22:54.927 "zone_append": false, 00:22:54.927 "compare": true, 00:22:54.927 "compare_and_write": true, 00:22:54.927 "abort": true, 00:22:54.927 "seek_hole": false, 00:22:54.927 "seek_data": false, 00:22:54.927 "copy": true, 00:22:54.927 "nvme_iov_md": false 00:22:54.927 }, 00:22:54.927 "memory_domains": [ 00:22:54.927 { 00:22:54.927 
"dma_device_id": "system", 00:22:54.927 "dma_device_type": 1 00:22:54.927 } 00:22:54.927 ], 00:22:54.927 "driver_specific": { 00:22:54.927 "nvme": [ 00:22:54.927 { 00:22:54.927 "trid": { 00:22:54.927 "trtype": "TCP", 00:22:54.927 "adrfam": "IPv4", 00:22:54.927 "traddr": "10.0.0.2", 00:22:54.927 "trsvcid": "4420", 00:22:54.927 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:54.927 }, 00:22:54.927 "ctrlr_data": { 00:22:54.927 "cntlid": 2, 00:22:54.927 "vendor_id": "0x8086", 00:22:54.927 "model_number": "SPDK bdev Controller", 00:22:54.927 "serial_number": "00000000000000000000", 00:22:54.927 "firmware_revision": "25.01", 00:22:54.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:54.927 "oacs": { 00:22:54.927 "security": 0, 00:22:54.927 "format": 0, 00:22:54.927 "firmware": 0, 00:22:54.927 "ns_manage": 0 00:22:54.927 }, 00:22:54.928 "multi_ctrlr": true, 00:22:54.928 "ana_reporting": false 00:22:54.928 }, 00:22:54.928 "vs": { 00:22:54.928 "nvme_version": "1.3" 00:22:54.928 }, 00:22:54.928 "ns_data": { 00:22:54.928 "id": 1, 00:22:54.928 "can_share": true 00:22:54.928 } 00:22:54.928 } 00:22:54.928 ], 00:22:54.928 "mp_policy": "active_passive" 00:22:54.928 } 00:22:54.928 } 00:22:54.928 ] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QwEoQRHVyP 00:22:54.928 15:20:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QwEoQRHVyP 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.QwEoQRHVyP 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.928 [2024-10-01 15:20:04.662589] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.928 [2024-10-01 15:20:04.662701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.928 15:20:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.928 [2024-10-01 15:20:04.686670] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.928 nvme0n1 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:54.928 [ 00:22:54.928 { 00:22:54.928 "name": "nvme0n1", 00:22:54.928 "aliases": [ 00:22:54.928 "6e897f95-3439-45d0-9cc6-156c49c55b41" 00:22:54.928 ], 00:22:54.928 "product_name": "NVMe disk", 00:22:54.928 "block_size": 512, 00:22:54.928 "num_blocks": 2097152, 00:22:54.928 "uuid": "6e897f95-3439-45d0-9cc6-156c49c55b41", 00:22:54.928 "numa_id": 0, 00:22:54.928 "assigned_rate_limits": { 00:22:54.928 "rw_ios_per_sec": 0, 00:22:54.928 "rw_mbytes_per_sec": 0, 
00:22:54.928 "r_mbytes_per_sec": 0, 00:22:54.928 "w_mbytes_per_sec": 0 00:22:54.928 }, 00:22:54.928 "claimed": false, 00:22:54.928 "zoned": false, 00:22:54.928 "supported_io_types": { 00:22:54.928 "read": true, 00:22:54.928 "write": true, 00:22:54.928 "unmap": false, 00:22:54.928 "flush": true, 00:22:54.928 "reset": true, 00:22:54.928 "nvme_admin": true, 00:22:54.928 "nvme_io": true, 00:22:54.928 "nvme_io_md": false, 00:22:54.928 "write_zeroes": true, 00:22:54.928 "zcopy": false, 00:22:54.928 "get_zone_info": false, 00:22:54.928 "zone_management": false, 00:22:54.928 "zone_append": false, 00:22:54.928 "compare": true, 00:22:54.928 "compare_and_write": true, 00:22:54.928 "abort": true, 00:22:54.928 "seek_hole": false, 00:22:54.928 "seek_data": false, 00:22:54.928 "copy": true, 00:22:54.928 "nvme_iov_md": false 00:22:54.928 }, 00:22:54.928 "memory_domains": [ 00:22:54.928 { 00:22:54.928 "dma_device_id": "system", 00:22:54.928 "dma_device_type": 1 00:22:54.928 } 00:22:54.928 ], 00:22:54.928 "driver_specific": { 00:22:54.928 "nvme": [ 00:22:54.928 { 00:22:54.928 "trid": { 00:22:54.928 "trtype": "TCP", 00:22:54.928 "adrfam": "IPv4", 00:22:54.928 "traddr": "10.0.0.2", 00:22:54.928 "trsvcid": "4421", 00:22:54.928 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:54.928 }, 00:22:54.928 "ctrlr_data": { 00:22:54.928 "cntlid": 3, 00:22:54.928 "vendor_id": "0x8086", 00:22:54.928 "model_number": "SPDK bdev Controller", 00:22:54.928 "serial_number": "00000000000000000000", 00:22:54.928 "firmware_revision": "25.01", 00:22:54.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:54.928 "oacs": { 00:22:54.928 "security": 0, 00:22:54.928 "format": 0, 00:22:54.928 "firmware": 0, 00:22:54.928 "ns_manage": 0 00:22:54.928 }, 00:22:54.928 "multi_ctrlr": true, 00:22:54.928 "ana_reporting": false 00:22:54.928 }, 00:22:54.928 "vs": { 00:22:54.928 "nvme_version": "1.3" 00:22:54.928 }, 00:22:54.928 "ns_data": { 00:22:54.928 "id": 1, 00:22:54.928 "can_share": true 00:22:54.928 } 00:22:54.928 } 
00:22:54.928 ], 00:22:54.928 "mp_policy": "active_passive" 00:22:54.928 } 00:22:54.928 } 00:22:54.928 ] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.928 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.QwEoQRHVyP 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:55.264 rmmod nvme_tcp 00:22:55.264 rmmod nvme_fabrics 00:22:55.264 rmmod nvme_keyring 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:55.264 15:20:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@513 -- # '[' -n 4048429 ']' 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # killprocess 4048429 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 4048429 ']' 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 4048429 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4048429 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4048429' 00:22:55.264 killing process with pid 4048429 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 4048429 00:22:55.264 15:20:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 4048429 00:22:55.264 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:55.264 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:55.264 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:55.264 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:55.264 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-save 00:22:55.264 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:55.264 
15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@787 -- # iptables-restore 00:22:55.532 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:55.532 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:55.532 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.532 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.532 15:20:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.443 15:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.443 00:22:57.443 real 0m11.561s 00:22:57.443 user 0m4.100s 00:22:57.443 sys 0m6.011s 00:22:57.443 15:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:57.443 15:20:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:57.443 ************************************ 00:22:57.443 END TEST nvmf_async_init 00:22:57.443 ************************************ 00:22:57.443 15:20:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:57.443 15:20:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:57.443 15:20:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.443 15:20:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.443 ************************************ 00:22:57.443 START TEST dma 00:22:57.443 ************************************ 00:22:57.443 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:57.709 * Looking for test storage... 00:22:57.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:57.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.709 --rc genhtml_branch_coverage=1 00:22:57.709 --rc genhtml_function_coverage=1 00:22:57.709 --rc genhtml_legend=1 00:22:57.709 --rc geninfo_all_blocks=1 00:22:57.709 --rc geninfo_unexecuted_blocks=1 00:22:57.709 00:22:57.709 ' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:57.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.709 --rc genhtml_branch_coverage=1 00:22:57.709 --rc genhtml_function_coverage=1 
00:22:57.709 --rc genhtml_legend=1 00:22:57.709 --rc geninfo_all_blocks=1 00:22:57.709 --rc geninfo_unexecuted_blocks=1 00:22:57.709 00:22:57.709 ' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:57.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.709 --rc genhtml_branch_coverage=1 00:22:57.709 --rc genhtml_function_coverage=1 00:22:57.709 --rc genhtml_legend=1 00:22:57.709 --rc geninfo_all_blocks=1 00:22:57.709 --rc geninfo_unexecuted_blocks=1 00:22:57.709 00:22:57.709 ' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:57.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.709 --rc genhtml_branch_coverage=1 00:22:57.709 --rc genhtml_function_coverage=1 00:22:57.709 --rc genhtml_legend=1 00:22:57.709 --rc geninfo_all_blocks=1 00:22:57.709 --rc geninfo_unexecuted_blocks=1 00:22:57.709 00:22:57.709 ' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:57.709 
15:20:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:57.709 00:22:57.709 real 0m0.239s 00:22:57.709 user 0m0.145s 00:22:57.709 sys 0m0.110s 00:22:57.709 15:20:07 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:57.709 ************************************ 00:22:57.709 END TEST dma 00:22:57.709 ************************************ 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.709 ************************************ 00:22:57.709 START TEST nvmf_identify 00:22:57.709 ************************************ 00:22:57.709 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:57.970 * Looking for test storage... 
00:22:57.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:57.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.970 --rc genhtml_branch_coverage=1 00:22:57.970 --rc genhtml_function_coverage=1 00:22:57.970 --rc genhtml_legend=1 00:22:57.970 --rc geninfo_all_blocks=1 00:22:57.970 --rc geninfo_unexecuted_blocks=1 00:22:57.970 00:22:57.970 ' 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:22:57.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.970 --rc genhtml_branch_coverage=1 00:22:57.970 --rc genhtml_function_coverage=1 00:22:57.970 --rc genhtml_legend=1 00:22:57.970 --rc geninfo_all_blocks=1 00:22:57.970 --rc geninfo_unexecuted_blocks=1 00:22:57.970 00:22:57.970 ' 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:57.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.970 --rc genhtml_branch_coverage=1 00:22:57.970 --rc genhtml_function_coverage=1 00:22:57.970 --rc genhtml_legend=1 00:22:57.970 --rc geninfo_all_blocks=1 00:22:57.970 --rc geninfo_unexecuted_blocks=1 00:22:57.970 00:22:57.970 ' 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:57.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.970 --rc genhtml_branch_coverage=1 00:22:57.970 --rc genhtml_function_coverage=1 00:22:57.970 --rc genhtml_legend=1 00:22:57.970 --rc geninfo_all_blocks=1 00:22:57.970 --rc geninfo_unexecuted_blocks=1 00:22:57.970 00:22:57.970 ' 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.970 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.971 15:20:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.107 15:20:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:06.107 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:06.107 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:06.107 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.107 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:06.108 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # is_hw=yes 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.108 15:20:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:06.108 15:20:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
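The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-287) can be condensed into a short sketch. Interface names, addresses, and the port all come from the log; the `run` echo wrapper is an illustrative device so the sequence can be read and dry-run without root privileges, and is not part of the original script.

```shell
# Dry-run sketch of the loopback topology nvmf_tcp_init builds:
# one port (cvl_0_0) moves into a namespace as the target at 10.0.0.2,
# the other (cvl_0_1) stays in the root namespace as the initiator.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS="${TGT_IF}_ns_spdk"
run() { echo "+ $*"; }      # swap the echo for "$@" to execute for real (needs root)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"                               # isolate the target port
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"            # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # reachability check, as in the log
```

The two pings in the log (one per direction) confirm the path works in both namespaces before the target starts listening on port 4420.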
00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:06.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:23:06.108 00:23:06.108 --- 10.0.0.2 ping statistics --- 00:23:06.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.108 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:23:06.108 00:23:06.108 --- 10.0.0.1 ping statistics --- 00:23:06.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.108 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # return 0 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:06.108 15:20:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4053123 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4053123 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 4053123 ']' 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.108 15:20:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.108 [2024-10-01 15:20:15.206043] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:23:06.108 [2024-10-01 15:20:15.206108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.108 [2024-10-01 15:20:15.277241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.108 [2024-10-01 15:20:15.352736] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.108 [2024-10-01 15:20:15.352777] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.108 [2024-10-01 15:20:15.352785] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.108 [2024-10-01 15:20:15.352791] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.108 [2024-10-01 15:20:15.352797] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:06.108 [2024-10-01 15:20:15.352937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.108 [2024-10-01 15:20:15.353136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.108 [2024-10-01 15:20:15.353136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.108 [2024-10-01 15:20:15.353037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.368 [2024-10-01 15:20:16.023122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.368 Malloc0 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.368 15:20:16 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.368 [2024-10-01 15:20:16.122595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.368 15:20:16 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.368 [ 00:23:06.368 { 00:23:06.368 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:06.368 "subtype": "Discovery", 00:23:06.368 "listen_addresses": [ 00:23:06.368 { 00:23:06.368 "trtype": "TCP", 00:23:06.368 "adrfam": "IPv4", 00:23:06.368 "traddr": "10.0.0.2", 00:23:06.368 "trsvcid": "4420" 00:23:06.368 } 00:23:06.368 ], 00:23:06.368 "allow_any_host": true, 00:23:06.368 "hosts": [] 00:23:06.368 }, 00:23:06.368 { 00:23:06.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.368 "subtype": "NVMe", 00:23:06.368 "listen_addresses": [ 00:23:06.368 { 00:23:06.368 "trtype": "TCP", 00:23:06.368 "adrfam": "IPv4", 00:23:06.368 "traddr": "10.0.0.2", 00:23:06.368 "trsvcid": "4420" 00:23:06.368 } 00:23:06.368 ], 00:23:06.368 "allow_any_host": true, 00:23:06.368 "hosts": [], 00:23:06.368 "serial_number": "SPDK00000000000001", 00:23:06.368 "model_number": "SPDK bdev Controller", 00:23:06.368 "max_namespaces": 32, 00:23:06.368 "min_cntlid": 1, 00:23:06.368 "max_cntlid": 65519, 00:23:06.368 "namespaces": [ 00:23:06.368 { 00:23:06.368 "nsid": 1, 00:23:06.368 "bdev_name": "Malloc0", 00:23:06.368 "name": "Malloc0", 00:23:06.368 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:06.368 "eui64": "ABCDEF0123456789", 00:23:06.368 "uuid": "08d1eded-a9ed-44c8-b9e2-776477a55823" 00:23:06.368 } 00:23:06.368 ] 00:23:06.368 } 00:23:06.368 ] 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.368 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:06.368 [2024-10-01 15:20:16.186333] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:23:06.369 [2024-10-01 15:20:16.186382] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053364 ] 00:23:06.369 [2024-10-01 15:20:16.220507] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:06.369 [2024-10-01 15:20:16.220559] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:06.369 [2024-10-01 15:20:16.220564] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:06.369 [2024-10-01 15:20:16.220575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:06.369 [2024-10-01 15:20:16.220582] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:06.369 [2024-10-01 15:20:16.221282] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:06.369 [2024-10-01 15:20:16.221315] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x146d760 0 00:23:06.638 [2024-10-01 15:20:16.235010] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:06.638 [2024-10-01 15:20:16.235023] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:06.638 [2024-10-01 15:20:16.235029] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:06.638 [2024-10-01 15:20:16.235032] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:06.638 [2024-10-01 15:20:16.235058] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.235064] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.235069] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.638 [2024-10-01 15:20:16.235082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:06.638 [2024-10-01 15:20:16.235100] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.638 [2024-10-01 15:20:16.240006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.638 [2024-10-01 15:20:16.240016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.638 [2024-10-01 15:20:16.240019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240024] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.638 [2024-10-01 15:20:16.240035] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:06.638 [2024-10-01 15:20:16.240042] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:06.638 [2024-10-01 15:20:16.240047] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:06.638 [2024-10-01 15:20:16.240061] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240065] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240069] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 
00:23:06.638 [2024-10-01 15:20:16.240076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.638 [2024-10-01 15:20:16.240089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.638 [2024-10-01 15:20:16.240253] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.638 [2024-10-01 15:20:16.240260] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.638 [2024-10-01 15:20:16.240263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240267] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.638 [2024-10-01 15:20:16.240273] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:06.638 [2024-10-01 15:20:16.240280] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:06.638 [2024-10-01 15:20:16.240287] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240290] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240294] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.638 [2024-10-01 15:20:16.240301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.638 [2024-10-01 15:20:16.240311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.638 [2024-10-01 15:20:16.240511] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.638 [2024-10-01 15:20:16.240521] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:06.638 [2024-10-01 15:20:16.240524] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240528] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.638 [2024-10-01 15:20:16.240534] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:06.638 [2024-10-01 15:20:16.240542] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:06.638 [2024-10-01 15:20:16.240548] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240552] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.638 [2024-10-01 15:20:16.240556] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.638 [2024-10-01 15:20:16.240563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.638 [2024-10-01 15:20:16.240573] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.638 [2024-10-01 15:20:16.240779] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.638 [2024-10-01 15:20:16.240785] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.638 [2024-10-01 15:20:16.240789] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.240793] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.639 [2024-10-01 15:20:16.240798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:06.639 [2024-10-01 15:20:16.240807] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.240811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.240814] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.240821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.639 [2024-10-01 15:20:16.240831] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.639 [2024-10-01 15:20:16.241016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.639 [2024-10-01 15:20:16.241023] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.639 [2024-10-01 15:20:16.241026] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241030] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.639 [2024-10-01 15:20:16.241035] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:06.639 [2024-10-01 15:20:16.241040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:06.639 [2024-10-01 15:20:16.241048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:06.639 [2024-10-01 15:20:16.241153] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:06.639 [2024-10-01 15:20:16.241158] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:23:06.639 [2024-10-01 15:20:16.241166] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241170] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241174] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.241181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.639 [2024-10-01 15:20:16.241194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.639 [2024-10-01 15:20:16.241354] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.639 [2024-10-01 15:20:16.241361] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.639 [2024-10-01 15:20:16.241364] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.639 [2024-10-01 15:20:16.241373] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:06.639 [2024-10-01 15:20:16.241382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241386] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241390] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.241396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.639 [2024-10-01 15:20:16.241406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.639 [2024-10-01 
15:20:16.241570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.639 [2024-10-01 15:20:16.241576] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.639 [2024-10-01 15:20:16.241580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.639 [2024-10-01 15:20:16.241588] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:06.639 [2024-10-01 15:20:16.241593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:06.639 [2024-10-01 15:20:16.241601] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:06.639 [2024-10-01 15:20:16.241609] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:06.639 [2024-10-01 15:20:16.241618] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241622] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.241629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.639 [2024-10-01 15:20:16.241639] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.639 [2024-10-01 15:20:16.241830] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.639 [2024-10-01 15:20:16.241837] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:06.639 [2024-10-01 15:20:16.241840] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241844] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x146d760): datao=0, datal=4096, cccid=0 00:23:06.639 [2024-10-01 15:20:16.241849] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cd480) on tqpair(0x146d760): expected_datao=0, payload_size=4096 00:23:06.639 [2024-10-01 15:20:16.241854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241868] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.241873] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.639 [2024-10-01 15:20:16.282144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.639 [2024-10-01 15:20:16.282150] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282154] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.639 [2024-10-01 15:20:16.282162] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:06.639 [2024-10-01 15:20:16.282167] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:06.639 [2024-10-01 15:20:16.282172] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:06.639 [2024-10-01 15:20:16.282178] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:06.639 [2024-10-01 15:20:16.282182] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:23:06.639 [2024-10-01 15:20:16.282187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:06.639 [2024-10-01 15:20:16.282196] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:06.639 [2024-10-01 15:20:16.282202] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282207] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282210] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.282217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:06.639 [2024-10-01 15:20:16.282229] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.639 [2024-10-01 15:20:16.282430] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.639 [2024-10-01 15:20:16.282437] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.639 [2024-10-01 15:20:16.282440] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282444] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.639 [2024-10-01 15:20:16.282453] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282457] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282461] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.282467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.639 [2024-10-01 15:20:16.282473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282481] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.282487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.639 [2024-10-01 15:20:16.282493] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282497] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282500] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.282506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.639 [2024-10-01 15:20:16.282512] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282516] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.639 [2024-10-01 15:20:16.282520] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.639 [2024-10-01 15:20:16.282526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.639 [2024-10-01 15:20:16.282533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:06.639 [2024-10-01 15:20:16.282544] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:23:06.639 [2024-10-01 15:20:16.282551] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.282554] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x146d760) 00:23:06.640 [2024-10-01 15:20:16.282561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.640 [2024-10-01 15:20:16.282573] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd480, cid 0, qid 0 00:23:06.640 [2024-10-01 15:20:16.282578] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd600, cid 1, qid 0 00:23:06.640 [2024-10-01 15:20:16.282583] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd780, cid 2, qid 0 00:23:06.640 [2024-10-01 15:20:16.282588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.640 [2024-10-01 15:20:16.282593] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cda80, cid 4, qid 0 00:23:06.640 [2024-10-01 15:20:16.282818] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.640 [2024-10-01 15:20:16.282824] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.640 [2024-10-01 15:20:16.282827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.282831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cda80) on tqpair=0x146d760 00:23:06.640 [2024-10-01 15:20:16.282837] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:06.640 [2024-10-01 15:20:16.282842] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:06.640 [2024-10-01 15:20:16.282852] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.282856] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x146d760) 00:23:06.640 [2024-10-01 15:20:16.282863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.640 [2024-10-01 15:20:16.282873] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cda80, cid 4, qid 0 00:23:06.640 [2024-10-01 15:20:16.283076] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.640 [2024-10-01 15:20:16.283082] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.640 [2024-10-01 15:20:16.283086] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283090] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x146d760): datao=0, datal=4096, cccid=4 00:23:06.640 [2024-10-01 15:20:16.283094] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cda80) on tqpair(0x146d760): expected_datao=0, payload_size=4096 00:23:06.640 [2024-10-01 15:20:16.283099] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283110] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283114] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283306] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.640 [2024-10-01 15:20:16.283312] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.640 [2024-10-01 15:20:16.283316] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283319] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cda80) on tqpair=0x146d760 00:23:06.640 [2024-10-01 15:20:16.283331] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:06.640 [2024-10-01 15:20:16.283359] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283364] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x146d760) 00:23:06.640 [2024-10-01 15:20:16.283370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.640 [2024-10-01 15:20:16.283377] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283381] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283385] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x146d760) 00:23:06.640 [2024-10-01 15:20:16.283391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.640 [2024-10-01 15:20:16.283404] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cda80, cid 4, qid 0 00:23:06.640 [2024-10-01 15:20:16.283409] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cdc00, cid 5, qid 0 00:23:06.640 [2024-10-01 15:20:16.283632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.640 [2024-10-01 15:20:16.283639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.640 [2024-10-01 15:20:16.283643] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283647] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x146d760): datao=0, datal=1024, cccid=4 00:23:06.640 [2024-10-01 15:20:16.283651] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cda80) on tqpair(0x146d760): expected_datao=0, 
payload_size=1024 00:23:06.640 [2024-10-01 15:20:16.283656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283662] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283666] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.640 [2024-10-01 15:20:16.283678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.640 [2024-10-01 15:20:16.283681] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.283685] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cdc00) on tqpair=0x146d760 00:23:06.640 [2024-10-01 15:20:16.325004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.640 [2024-10-01 15:20:16.325016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.640 [2024-10-01 15:20:16.325020] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325024] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cda80) on tqpair=0x146d760 00:23:06.640 [2024-10-01 15:20:16.325039] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325043] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x146d760) 00:23:06.640 [2024-10-01 15:20:16.325050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.640 [2024-10-01 15:20:16.325066] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cda80, cid 4, qid 0 00:23:06.640 [2024-10-01 15:20:16.325247] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.640 [2024-10-01 15:20:16.325254] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.640 [2024-10-01 15:20:16.325258] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325262] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x146d760): datao=0, datal=3072, cccid=4 00:23:06.640 [2024-10-01 15:20:16.325266] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cda80) on tqpair(0x146d760): expected_datao=0, payload_size=3072 00:23:06.640 [2024-10-01 15:20:16.325271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325278] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325284] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325443] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.640 [2024-10-01 15:20:16.325450] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.640 [2024-10-01 15:20:16.325453] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325457] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cda80) on tqpair=0x146d760 00:23:06.640 [2024-10-01 15:20:16.325466] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x146d760) 00:23:06.640 [2024-10-01 15:20:16.325476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.640 [2024-10-01 15:20:16.325490] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cda80, cid 4, qid 0 00:23:06.640 [2024-10-01 15:20:16.325718] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.640 [2024-10-01 
15:20:16.325725] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.640 [2024-10-01 15:20:16.325728] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325732] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x146d760): datao=0, datal=8, cccid=4 00:23:06.640 [2024-10-01 15:20:16.325736] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14cda80) on tqpair(0x146d760): expected_datao=0, payload_size=8 00:23:06.640 [2024-10-01 15:20:16.325741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325747] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.325751] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.366170] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.640 [2024-10-01 15:20:16.366181] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.640 [2024-10-01 15:20:16.366184] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.640 [2024-10-01 15:20:16.366188] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cda80) on tqpair=0x146d760
00:23:06.640 =====================================================
00:23:06.640 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:06.640 =====================================================
00:23:06.640 Controller Capabilities/Features
00:23:06.640 ================================
00:23:06.640 Vendor ID: 0000
00:23:06.640 Subsystem Vendor ID: 0000
00:23:06.640 Serial Number: ....................
00:23:06.640 Model Number: ........................................
00:23:06.640 Firmware Version: 25.01
00:23:06.640 Recommended Arb Burst: 0
00:23:06.640 IEEE OUI Identifier: 00 00 00
00:23:06.640 Multi-path I/O
00:23:06.640 May have multiple subsystem ports: No
00:23:06.640 May have multiple controllers: No
00:23:06.640 Associated with SR-IOV VF: No
00:23:06.640 Max Data Transfer Size: 131072
00:23:06.640 Max Number of Namespaces: 0
00:23:06.640 Max Number of I/O Queues: 1024
00:23:06.640 NVMe Specification Version (VS): 1.3
00:23:06.640 NVMe Specification Version (Identify): 1.3
00:23:06.640 Maximum Queue Entries: 128
00:23:06.640 Contiguous Queues Required: Yes
00:23:06.640 Arbitration Mechanisms Supported
00:23:06.640 Weighted Round Robin: Not Supported
00:23:06.640 Vendor Specific: Not Supported
00:23:06.640 Reset Timeout: 15000 ms
00:23:06.640 Doorbell Stride: 4 bytes
00:23:06.640 NVM Subsystem Reset: Not Supported
00:23:06.640 Command Sets Supported
00:23:06.640 NVM Command Set: Supported
00:23:06.640 Boot Partition: Not Supported
00:23:06.641 Memory Page Size Minimum: 4096 bytes
00:23:06.641 Memory Page Size Maximum: 4096 bytes
00:23:06.641 Persistent Memory Region: Not Supported
00:23:06.641 Optional Asynchronous Events Supported
00:23:06.641 Namespace Attribute Notices: Not Supported
00:23:06.641 Firmware Activation Notices: Not Supported
00:23:06.641 ANA Change Notices: Not Supported
00:23:06.641 PLE Aggregate Log Change Notices: Not Supported
00:23:06.641 LBA Status Info Alert Notices: Not Supported
00:23:06.641 EGE Aggregate Log Change Notices: Not Supported
00:23:06.641 Normal NVM Subsystem Shutdown event: Not Supported
00:23:06.641 Zone Descriptor Change Notices: Not Supported
00:23:06.641 Discovery Log Change Notices: Supported
00:23:06.641 Controller Attributes
00:23:06.641 128-bit Host Identifier: Not Supported
00:23:06.641 Non-Operational Permissive Mode: Not Supported
00:23:06.641 NVM Sets: Not Supported
00:23:06.641 Read Recovery Levels: Not Supported
00:23:06.641 Endurance Groups: Not Supported
00:23:06.641 Predictable Latency Mode: Not Supported
00:23:06.641 Traffic Based Keep Alive: Not Supported
00:23:06.641 Namespace Granularity: Not Supported
00:23:06.641 SQ Associations: Not Supported
00:23:06.641 UUID List: Not Supported
00:23:06.641 Multi-Domain Subsystem: Not Supported
00:23:06.641 Fixed Capacity Management: Not Supported
00:23:06.641 Variable Capacity Management: Not Supported
00:23:06.641 Delete Endurance Group: Not Supported
00:23:06.641 Delete NVM Set: Not Supported
00:23:06.641 Extended LBA Formats Supported: Not Supported
00:23:06.641 Flexible Data Placement Supported: Not Supported
00:23:06.641
00:23:06.641 Controller Memory Buffer Support
00:23:06.641 ================================
00:23:06.641 Supported: No
00:23:06.641
00:23:06.641 Persistent Memory Region Support
00:23:06.641 ================================
00:23:06.641 Supported: No
00:23:06.641
00:23:06.641 Admin Command Set Attributes
00:23:06.641 ============================
00:23:06.641 Security Send/Receive: Not Supported
00:23:06.641 Format NVM: Not Supported
00:23:06.641 Firmware Activate/Download: Not Supported
00:23:06.641 Namespace Management: Not Supported
00:23:06.641 Device Self-Test: Not Supported
00:23:06.641 Directives: Not Supported
00:23:06.641 NVMe-MI: Not Supported
00:23:06.641 Virtualization Management: Not Supported
00:23:06.641 Doorbell Buffer Config: Not Supported
00:23:06.641 Get LBA Status Capability: Not Supported
00:23:06.641 Command & Feature Lockdown Capability: Not Supported
00:23:06.641 Abort Command Limit: 1
00:23:06.641 Async Event Request Limit: 4
00:23:06.641 Number of Firmware Slots: N/A
00:23:06.641 Firmware Slot 1 Read-Only: N/A
00:23:06.641 Firmware Activation Without Reset: N/A
00:23:06.641 Multiple Update Detection Support: N/A
00:23:06.641 Firmware Update Granularity: No Information Provided
00:23:06.641 Per-Namespace SMART Log: No
00:23:06.641 Asymmetric Namespace Access Log Page: Not Supported
00:23:06.641 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:06.641 Command Effects Log Page: Not Supported
00:23:06.641 Get Log Page Extended Data: Supported
00:23:06.641 Telemetry Log Pages: Not Supported
00:23:06.641 Persistent Event Log Pages: Not Supported
00:23:06.641 Supported Log Pages Log Page: May Support
00:23:06.641 Commands Supported & Effects Log Page: Not Supported
00:23:06.641 Feature Identifiers & Effects Log Page: May Support
00:23:06.641 NVMe-MI Commands & Effects Log Page: May Support
00:23:06.641 Data Area 4 for Telemetry Log: Not Supported
00:23:06.641 Error Log Page Entries Supported: 128
00:23:06.641 Keep Alive: Not Supported
00:23:06.641
00:23:06.641 NVM Command Set Attributes
00:23:06.641 ==========================
00:23:06.641 Submission Queue Entry Size
00:23:06.641 Max: 1
00:23:06.641 Min: 1
00:23:06.641 Completion Queue Entry Size
00:23:06.641 Max: 1
00:23:06.641 Min: 1
00:23:06.641 Number of Namespaces: 0
00:23:06.641 Compare Command: Not Supported
00:23:06.641 Write Uncorrectable Command: Not Supported
00:23:06.641 Dataset Management Command: Not Supported
00:23:06.641 Write Zeroes Command: Not Supported
00:23:06.641 Set Features Save Field: Not Supported
00:23:06.641 Reservations: Not Supported
00:23:06.641 Timestamp: Not Supported
00:23:06.641 Copy: Not Supported
00:23:06.641 Volatile Write Cache: Not Present
00:23:06.641 Atomic Write Unit (Normal): 1
00:23:06.641 Atomic Write Unit (PFail): 1
00:23:06.641 Atomic Compare & Write Unit: 1
00:23:06.641 Fused Compare & Write: Supported
00:23:06.641 Scatter-Gather List
00:23:06.641 SGL Command Set: Supported
00:23:06.641 SGL Keyed: Supported
00:23:06.641 SGL Bit Bucket Descriptor: Not Supported
00:23:06.641 SGL Metadata Pointer: Not Supported
00:23:06.641 Oversized SGL: Not Supported
00:23:06.641 SGL Metadata Address: Not Supported
00:23:06.641 SGL Offset: Supported
00:23:06.641 Transport SGL Data Block: Not Supported
00:23:06.641 Replay Protected Memory Block: Not Supported
00:23:06.641
00:23:06.641 Firmware Slot Information
00:23:06.641 =========================
00:23:06.641 Active slot: 0
00:23:06.641
00:23:06.641
00:23:06.641 Error Log
00:23:06.641 =========
00:23:06.641
00:23:06.641 Active Namespaces
00:23:06.641 =================
00:23:06.641 Discovery Log Page
00:23:06.641 ==================
00:23:06.641 Generation Counter: 2
00:23:06.641 Number of Records: 2
00:23:06.641 Record Format: 0
00:23:06.641
00:23:06.641 Discovery Log Entry 0
00:23:06.641 ----------------------
00:23:06.641 Transport Type: 3 (TCP)
00:23:06.641 Address Family: 1 (IPv4)
00:23:06.641 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:06.641 Entry Flags:
00:23:06.641 Duplicate Returned Information: 1
00:23:06.641 Explicit Persistent Connection Support for Discovery: 1
00:23:06.641 Transport Requirements:
00:23:06.641 Secure Channel: Not Required
00:23:06.641 Port ID: 0 (0x0000)
00:23:06.641 Controller ID: 65535 (0xffff)
00:23:06.641 Admin Max SQ Size: 128
00:23:06.641 Transport Service Identifier: 4420
00:23:06.641 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:06.641 Transport Address: 10.0.0.2
00:23:06.641 Discovery Log Entry 1
00:23:06.641 ----------------------
00:23:06.641 Transport Type: 3 (TCP)
00:23:06.641 Address Family: 1 (IPv4)
00:23:06.641 Subsystem Type: 2 (NVM Subsystem)
00:23:06.641 Entry Flags:
00:23:06.641 Duplicate Returned Information: 0
00:23:06.641 Explicit Persistent Connection Support for Discovery: 0
00:23:06.641 Transport Requirements:
00:23:06.641 Secure Channel: Not Required
00:23:06.641 Port ID: 0 (0x0000)
00:23:06.641 Controller ID: 65535 (0xffff)
00:23:06.641 Admin Max SQ Size: 128
00:23:06.641 Transport Service Identifier: 4420
00:23:06.641 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:06.641 Transport Address: 10.0.0.2 [2024-10-01 15:20:16.366271] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:06.641 [2024-10-01 15:20:16.366282]
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd480) on tqpair=0x146d760 00:23:06.641 [2024-10-01 15:20:16.366288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.641 [2024-10-01 15:20:16.366294] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd600) on tqpair=0x146d760 00:23:06.641 [2024-10-01 15:20:16.366299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.641 [2024-10-01 15:20:16.366304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd780) on tqpair=0x146d760 00:23:06.641 [2024-10-01 15:20:16.366308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.641 [2024-10-01 15:20:16.366313] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.641 [2024-10-01 15:20:16.366318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.641 [2024-10-01 15:20:16.366327] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.641 [2024-10-01 15:20:16.366331] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.641 [2024-10-01 15:20:16.366335] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.366342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.366357] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.366451] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.366458] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.366461] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.366465] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.366472] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.366476] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.366479] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.366486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.366500] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.366678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.366685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.366688] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.366692] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.366697] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:06.642 [2024-10-01 15:20:16.366704] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:06.642 [2024-10-01 15:20:16.366713] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.366717] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 
15:20:16.366721] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.366728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.366738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.366924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.366930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.366934] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.366938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.366948] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.366952] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.366956] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.366962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.366972] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.367173] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.367180] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.367184] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367188] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 
00:23:06.642 [2024-10-01 15:20:16.367198] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367202] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367205] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.367215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.367226] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.367394] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.367401] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.367405] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367409] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.367418] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367422] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367426] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.367432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.367442] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.367642] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.367648] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 
[2024-10-01 15:20:16.367652] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367656] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.367666] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367673] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.367680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.367690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.367871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.367877] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.367880] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367884] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.367894] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367898] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.367901] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.367908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.367918] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 
0 00:23:06.642 [2024-10-01 15:20:16.368119] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.368126] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.368129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.368143] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368150] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.368157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.368171] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.368352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.368359] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.368362] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368366] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.368376] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368380] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368383] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.368390] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.368400] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.368563] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.368570] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.368573] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.368587] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368591] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368594] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.368601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.368611] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.368785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.368792] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.368795] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368799] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.642 [2024-10-01 15:20:16.368808] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368812] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.368816] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.642 [2024-10-01 15:20:16.368823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.642 [2024-10-01 15:20:16.368833] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.642 [2024-10-01 15:20:16.373005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.642 [2024-10-01 15:20:16.373013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.642 [2024-10-01 15:20:16.373017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.642 [2024-10-01 15:20:16.373021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.643 [2024-10-01 15:20:16.373030] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.373034] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.373038] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x146d760) 00:23:06.643 [2024-10-01 15:20:16.373045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.643 [2024-10-01 15:20:16.373058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14cd900, cid 3, qid 0 00:23:06.643 [2024-10-01 15:20:16.373231] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.643 [2024-10-01 15:20:16.373238] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.643 [2024-10-01 15:20:16.373241] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.373245] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14cd900) on tqpair=0x146d760 00:23:06.643 [2024-10-01 15:20:16.373253] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:06.643 00:23:06.643 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:06.643 [2024-10-01 15:20:16.414790] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:23:06.643 [2024-10-01 15:20:16.414837] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053476 ] 00:23:06.643 [2024-10-01 15:20:16.447559] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:06.643 [2024-10-01 15:20:16.447602] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:06.643 [2024-10-01 15:20:16.447607] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:06.643 [2024-10-01 15:20:16.447618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:06.643 [2024-10-01 15:20:16.447627] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:06.643 [2024-10-01 15:20:16.451203] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:06.643 [2024-10-01 15:20:16.451233] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x95d760 0 00:23:06.643 [2024-10-01 15:20:16.451413] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:06.643 [2024-10-01 15:20:16.451421] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:06.643 [2024-10-01 15:20:16.451425] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:06.643 [2024-10-01 15:20:16.451428] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:06.643 [2024-10-01 15:20:16.451451] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.451457] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.451461] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.643 [2024-10-01 15:20:16.451472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:06.643 [2024-10-01 15:20:16.451486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.643 [2024-10-01 15:20:16.459005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.643 [2024-10-01 15:20:16.459014] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.643 [2024-10-01 15:20:16.459018] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.643 [2024-10-01 15:20:16.459034] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:06.643 [2024-10-01 15:20:16.459040] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:06.643 [2024-10-01 15:20:16.459049] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:06.643 [2024-10-01 15:20:16.459061] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459065] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459069] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.643 [2024-10-01 15:20:16.459076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.643 [2024-10-01 15:20:16.459089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.643 [2024-10-01 15:20:16.459239] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.643 [2024-10-01 15:20:16.459246] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.643 [2024-10-01 15:20:16.459249] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459253] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.643 [2024-10-01 15:20:16.459258] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:06.643 [2024-10-01 15:20:16.459266] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:06.643 [2024-10-01 15:20:16.459273] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459277] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.643 [2024-10-01 15:20:16.459287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.643 [2024-10-01 15:20:16.459298] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.643 [2024-10-01 15:20:16.459458] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.643 [2024-10-01 15:20:16.459465] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.643 [2024-10-01 15:20:16.459468] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.643 [2024-10-01 15:20:16.459477] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:06.643 [2024-10-01 15:20:16.459485] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:06.643 [2024-10-01 15:20:16.459491] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459495] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459499] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.643 [2024-10-01 15:20:16.459506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.643 [2024-10-01 15:20:16.459516] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.643 [2024-10-01 15:20:16.459574] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.643 [2024-10-01 15:20:16.459581] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.643 [2024-10-01 15:20:16.459584] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459588] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.643 
[2024-10-01 15:20:16.459593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:06.643 [2024-10-01 15:20:16.459603] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459607] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.643 [2024-10-01 15:20:16.459612] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.643 [2024-10-01 15:20:16.459619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.643 [2024-10-01 15:20:16.459630] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.643 [2024-10-01 15:20:16.459689] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.643 [2024-10-01 15:20:16.459695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.644 [2024-10-01 15:20:16.459698] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.459702] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.644 [2024-10-01 15:20:16.459707] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:06.644 [2024-10-01 15:20:16.459712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:06.644 [2024-10-01 15:20:16.459719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:06.644 [2024-10-01 15:20:16.459825] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:06.644 [2024-10-01 
15:20:16.459829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:06.644 [2024-10-01 15:20:16.459836] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.459840] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.459844] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.459850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.644 [2024-10-01 15:20:16.459861] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.644 [2024-10-01 15:20:16.459913] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.644 [2024-10-01 15:20:16.459919] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.644 [2024-10-01 15:20:16.459923] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.459927] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.644 [2024-10-01 15:20:16.459931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:06.644 [2024-10-01 15:20:16.459941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.459945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.459948] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.459955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.644 
[2024-10-01 15:20:16.459965] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.644 [2024-10-01 15:20:16.460030] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.644 [2024-10-01 15:20:16.460037] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.644 [2024-10-01 15:20:16.460040] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460044] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.644 [2024-10-01 15:20:16.460048] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:06.644 [2024-10-01 15:20:16.460053] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:06.644 [2024-10-01 15:20:16.460070] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460078] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460082] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.460089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.644 [2024-10-01 15:20:16.460100] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.644 [2024-10-01 15:20:16.460293] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:23:06.644 [2024-10-01 15:20:16.460300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.644 [2024-10-01 15:20:16.460304] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460307] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95d760): datao=0, datal=4096, cccid=0 00:23:06.644 [2024-10-01 15:20:16.460312] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bd480) on tqpair(0x95d760): expected_datao=0, payload_size=4096 00:23:06.644 [2024-10-01 15:20:16.460317] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460324] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460328] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460501] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.644 [2024-10-01 15:20:16.460508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.644 [2024-10-01 15:20:16.460511] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.644 [2024-10-01 15:20:16.460522] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:06.644 [2024-10-01 15:20:16.460527] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:06.644 [2024-10-01 15:20:16.460531] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:06.644 [2024-10-01 15:20:16.460535] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:06.644 [2024-10-01 15:20:16.460540] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:06.644 [2024-10-01 15:20:16.460544] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460553] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460560] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460564] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.460574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:06.644 [2024-10-01 15:20:16.460585] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.644 [2024-10-01 15:20:16.460644] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.644 [2024-10-01 15:20:16.460650] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.644 [2024-10-01 15:20:16.460654] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460658] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.644 [2024-10-01 15:20:16.460667] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460674] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.460680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.644 [2024-10-01 15:20:16.460686] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460694] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.460700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.644 [2024-10-01 15:20:16.460706] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460710] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460713] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.460719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.644 [2024-10-01 15:20:16.460725] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460729] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460732] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.460738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.644 [2024-10-01 15:20:16.460743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout 
(timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460760] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460764] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95d760) 00:23:06.644 [2024-10-01 15:20:16.460771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.644 [2024-10-01 15:20:16.460783] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd480, cid 0, qid 0 00:23:06.644 [2024-10-01 15:20:16.460788] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd600, cid 1, qid 0 00:23:06.644 [2024-10-01 15:20:16.460793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd780, cid 2, qid 0 00:23:06.644 [2024-10-01 15:20:16.460798] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.644 [2024-10-01 15:20:16.460802] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bda80, cid 4, qid 0 00:23:06.644 [2024-10-01 15:20:16.460893] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.644 [2024-10-01 15:20:16.460900] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.644 [2024-10-01 15:20:16.460903] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460907] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bda80) on tqpair=0x95d760 00:23:06.644 [2024-10-01 15:20:16.460912] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:06.644 [2024-10-01 15:20:16.460917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460925] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:06.644 [2024-10-01 15:20:16.460941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.644 [2024-10-01 15:20:16.460949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95d760) 00:23:06.645 [2024-10-01 15:20:16.460956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:06.645 [2024-10-01 15:20:16.460966] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bda80, cid 4, qid 0 00:23:06.645 [2024-10-01 15:20:16.461021] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.645 [2024-10-01 15:20:16.461028] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.645 [2024-10-01 15:20:16.461031] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461035] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bda80) on tqpair=0x95d760 00:23:06.645 [2024-10-01 15:20:16.461099] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.461109] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.461117] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461121] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95d760) 00:23:06.645 [2024-10-01 15:20:16.461127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.645 [2024-10-01 15:20:16.461138] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bda80, cid 4, qid 0 00:23:06.645 [2024-10-01 15:20:16.461351] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.645 [2024-10-01 15:20:16.461358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.645 [2024-10-01 15:20:16.461361] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461365] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95d760): datao=0, datal=4096, cccid=4 00:23:06.645 [2024-10-01 15:20:16.461370] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bda80) on tqpair(0x95d760): expected_datao=0, payload_size=4096 00:23:06.645 [2024-10-01 15:20:16.461374] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461381] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461385] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461506] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.645 [2024-10-01 15:20:16.461512] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.645 [2024-10-01 15:20:16.461515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461519] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bda80) on tqpair=0x95d760 00:23:06.645 [2024-10-01 15:20:16.461528] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:06.645 
[2024-10-01 15:20:16.461541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.461550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.461558] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461561] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95d760) 00:23:06.645 [2024-10-01 15:20:16.461572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.645 [2024-10-01 15:20:16.461583] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bda80, cid 4, qid 0 00:23:06.645 [2024-10-01 15:20:16.461746] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.645 [2024-10-01 15:20:16.461752] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.645 [2024-10-01 15:20:16.461756] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461760] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95d760): datao=0, datal=4096, cccid=4 00:23:06.645 [2024-10-01 15:20:16.461764] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bda80) on tqpair(0x95d760): expected_datao=0, payload_size=4096 00:23:06.645 [2024-10-01 15:20:16.461768] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461775] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461779] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.645 
[2024-10-01 15:20:16.461941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.645 [2024-10-01 15:20:16.461944] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461948] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bda80) on tqpair=0x95d760 00:23:06.645 [2024-10-01 15:20:16.461960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.461970] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.461977] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.461981] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95d760) 00:23:06.645 [2024-10-01 15:20:16.461987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.645 [2024-10-01 15:20:16.462003] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bda80, cid 4, qid 0 00:23:06.645 [2024-10-01 15:20:16.462166] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.645 [2024-10-01 15:20:16.462173] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.645 [2024-10-01 15:20:16.462176] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462180] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95d760): datao=0, datal=4096, cccid=4 00:23:06.645 [2024-10-01 15:20:16.462185] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bda80) on tqpair(0x95d760): expected_datao=0, payload_size=4096 00:23:06.645 [2024-10-01 
15:20:16.462189] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462196] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462199] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462317] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.645 [2024-10-01 15:20:16.462323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.645 [2024-10-01 15:20:16.462327] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462331] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bda80) on tqpair=0x95d760 00:23:06.645 [2024-10-01 15:20:16.462338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.462346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.462357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.462363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.462368] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.462373] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.462378] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set 
Features - Host ID 00:23:06.645 [2024-10-01 15:20:16.462383] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:06.645 [2024-10-01 15:20:16.462388] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:06.645 [2024-10-01 15:20:16.462401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95d760) 00:23:06.645 [2024-10-01 15:20:16.462412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.645 [2024-10-01 15:20:16.462419] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462423] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462426] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95d760) 00:23:06.645 [2024-10-01 15:20:16.462433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.645 [2024-10-01 15:20:16.462444] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bda80, cid 4, qid 0 00:23:06.645 [2024-10-01 15:20:16.462449] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bdc00, cid 5, qid 0 00:23:06.645 [2024-10-01 15:20:16.462656] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.645 [2024-10-01 15:20:16.462662] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.645 [2024-10-01 15:20:16.462666] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462670] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bda80) 
on tqpair=0x95d760 00:23:06.645 [2024-10-01 15:20:16.462676] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.645 [2024-10-01 15:20:16.462682] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.645 [2024-10-01 15:20:16.462686] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bdc00) on tqpair=0x95d760 00:23:06.645 [2024-10-01 15:20:16.462698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.645 [2024-10-01 15:20:16.462702] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95d760) 00:23:06.645 [2024-10-01 15:20:16.462709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.645 [2024-10-01 15:20:16.462719] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bdc00, cid 5, qid 0 00:23:06.645 [2024-10-01 15:20:16.462880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.646 [2024-10-01 15:20:16.462887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.646 [2024-10-01 15:20:16.462890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.462894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bdc00) on tqpair=0x95d760 00:23:06.646 [2024-10-01 15:20:16.462903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.462907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95d760) 00:23:06.646 [2024-10-01 15:20:16.462915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.646 [2024-10-01 15:20:16.462926] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bdc00, cid 5, qid 0 00:23:06.646 [2024-10-01 15:20:16.467005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.646 [2024-10-01 15:20:16.467013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.646 [2024-10-01 15:20:16.467017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bdc00) on tqpair=0x95d760 00:23:06.646 [2024-10-01 15:20:16.467030] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467034] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95d760) 00:23:06.646 [2024-10-01 15:20:16.467041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.646 [2024-10-01 15:20:16.467052] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bdc00, cid 5, qid 0 00:23:06.646 [2024-10-01 15:20:16.467195] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.646 [2024-10-01 15:20:16.467202] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.646 [2024-10-01 15:20:16.467205] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467209] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bdc00) on tqpair=0x95d760 00:23:06.646 [2024-10-01 15:20:16.467223] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467227] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95d760) 00:23:06.646 [2024-10-01 15:20:16.467234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:06.646 [2024-10-01 15:20:16.467241] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467245] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95d760) 00:23:06.646 [2024-10-01 15:20:16.467251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.646 [2024-10-01 15:20:16.467259] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467262] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x95d760) 00:23:06.646 [2024-10-01 15:20:16.467269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.646 [2024-10-01 15:20:16.467278] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467282] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x95d760) 00:23:06.646 [2024-10-01 15:20:16.467288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.646 [2024-10-01 15:20:16.467300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bdc00, cid 5, qid 0 00:23:06.646 [2024-10-01 15:20:16.467305] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bda80, cid 4, qid 0 00:23:06.646 [2024-10-01 15:20:16.467310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bdd80, cid 6, qid 0 00:23:06.646 [2024-10-01 15:20:16.467314] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bdf00, cid 7, qid 0 00:23:06.646 [2024-10-01 15:20:16.467523] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.646 [2024-10-01 15:20:16.467530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.646 [2024-10-01 15:20:16.467534] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467540] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95d760): datao=0, datal=8192, cccid=5 00:23:06.646 [2024-10-01 15:20:16.467544] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bdc00) on tqpair(0x95d760): expected_datao=0, payload_size=8192 00:23:06.646 [2024-10-01 15:20:16.467549] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467611] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467616] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467622] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.646 [2024-10-01 15:20:16.467627] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.646 [2024-10-01 15:20:16.467631] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467635] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95d760): datao=0, datal=512, cccid=4 00:23:06.646 [2024-10-01 15:20:16.467639] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bda80) on tqpair(0x95d760): expected_datao=0, payload_size=512 00:23:06.646 [2024-10-01 15:20:16.467644] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467650] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467654] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467659] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 
00:23:06.646 [2024-10-01 15:20:16.467665] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.646 [2024-10-01 15:20:16.467669] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467672] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95d760): datao=0, datal=512, cccid=6 00:23:06.646 [2024-10-01 15:20:16.467677] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bdd80) on tqpair(0x95d760): expected_datao=0, payload_size=512 00:23:06.646 [2024-10-01 15:20:16.467681] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467687] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467691] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467697] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:06.646 [2024-10-01 15:20:16.467702] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:06.646 [2024-10-01 15:20:16.467706] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467709] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95d760): datao=0, datal=4096, cccid=7 00:23:06.646 [2024-10-01 15:20:16.467714] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bdf00) on tqpair(0x95d760): expected_datao=0, payload_size=4096 00:23:06.646 [2024-10-01 15:20:16.467718] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467730] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467733] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467741] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.646 [2024-10-01 15:20:16.467747] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.646 [2024-10-01 15:20:16.467750] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467754] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bdc00) on tqpair=0x95d760 00:23:06.646 [2024-10-01 15:20:16.467766] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.646 [2024-10-01 15:20:16.467772] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.646 [2024-10-01 15:20:16.467776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467779] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bda80) on tqpair=0x95d760 00:23:06.646 [2024-10-01 15:20:16.467789] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.646 [2024-10-01 15:20:16.467797] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.646 [2024-10-01 15:20:16.467800] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467804] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bdd80) on tqpair=0x95d760 00:23:06.646 [2024-10-01 15:20:16.467811] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.646 [2024-10-01 15:20:16.467817] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.646 [2024-10-01 15:20:16.467820] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.646 [2024-10-01 15:20:16.467824] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bdf00) on tqpair=0x95d760 00:23:06.646 ===================================================== 00:23:06.646 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.646 ===================================================== 00:23:06.646 Controller Capabilities/Features 00:23:06.646 ================================ 
00:23:06.646 Vendor ID: 8086 00:23:06.646 Subsystem Vendor ID: 8086 00:23:06.646 Serial Number: SPDK00000000000001 00:23:06.646 Model Number: SPDK bdev Controller 00:23:06.646 Firmware Version: 25.01 00:23:06.646 Recommended Arb Burst: 6 00:23:06.646 IEEE OUI Identifier: e4 d2 5c 00:23:06.646 Multi-path I/O 00:23:06.646 May have multiple subsystem ports: Yes 00:23:06.646 May have multiple controllers: Yes 00:23:06.646 Associated with SR-IOV VF: No 00:23:06.646 Max Data Transfer Size: 131072 00:23:06.646 Max Number of Namespaces: 32 00:23:06.646 Max Number of I/O Queues: 127 00:23:06.646 NVMe Specification Version (VS): 1.3 00:23:06.646 NVMe Specification Version (Identify): 1.3 00:23:06.646 Maximum Queue Entries: 128 00:23:06.646 Contiguous Queues Required: Yes 00:23:06.646 Arbitration Mechanisms Supported 00:23:06.646 Weighted Round Robin: Not Supported 00:23:06.646 Vendor Specific: Not Supported 00:23:06.646 Reset Timeout: 15000 ms 00:23:06.646 Doorbell Stride: 4 bytes 00:23:06.646 NVM Subsystem Reset: Not Supported 00:23:06.646 Command Sets Supported 00:23:06.646 NVM Command Set: Supported 00:23:06.646 Boot Partition: Not Supported 00:23:06.646 Memory Page Size Minimum: 4096 bytes 00:23:06.646 Memory Page Size Maximum: 4096 bytes 00:23:06.646 Persistent Memory Region: Not Supported 00:23:06.646 Optional Asynchronous Events Supported 00:23:06.646 Namespace Attribute Notices: Supported 00:23:06.646 Firmware Activation Notices: Not Supported 00:23:06.646 ANA Change Notices: Not Supported 00:23:06.646 PLE Aggregate Log Change Notices: Not Supported 00:23:06.646 LBA Status Info Alert Notices: Not Supported 00:23:06.647 EGE Aggregate Log Change Notices: Not Supported 00:23:06.647 Normal NVM Subsystem Shutdown event: Not Supported 00:23:06.647 Zone Descriptor Change Notices: Not Supported 00:23:06.647 Discovery Log Change Notices: Not Supported 00:23:06.647 Controller Attributes 00:23:06.647 128-bit Host Identifier: Supported 00:23:06.647 Non-Operational Permissive 
Mode: Not Supported 00:23:06.647 NVM Sets: Not Supported 00:23:06.647 Read Recovery Levels: Not Supported 00:23:06.647 Endurance Groups: Not Supported 00:23:06.647 Predictable Latency Mode: Not Supported 00:23:06.647 Traffic Based Keep ALive: Not Supported 00:23:06.647 Namespace Granularity: Not Supported 00:23:06.647 SQ Associations: Not Supported 00:23:06.647 UUID List: Not Supported 00:23:06.647 Multi-Domain Subsystem: Not Supported 00:23:06.647 Fixed Capacity Management: Not Supported 00:23:06.647 Variable Capacity Management: Not Supported 00:23:06.647 Delete Endurance Group: Not Supported 00:23:06.647 Delete NVM Set: Not Supported 00:23:06.647 Extended LBA Formats Supported: Not Supported 00:23:06.647 Flexible Data Placement Supported: Not Supported 00:23:06.647 00:23:06.647 Controller Memory Buffer Support 00:23:06.647 ================================ 00:23:06.647 Supported: No 00:23:06.647 00:23:06.647 Persistent Memory Region Support 00:23:06.647 ================================ 00:23:06.647 Supported: No 00:23:06.647 00:23:06.647 Admin Command Set Attributes 00:23:06.647 ============================ 00:23:06.647 Security Send/Receive: Not Supported 00:23:06.647 Format NVM: Not Supported 00:23:06.647 Firmware Activate/Download: Not Supported 00:23:06.647 Namespace Management: Not Supported 00:23:06.647 Device Self-Test: Not Supported 00:23:06.647 Directives: Not Supported 00:23:06.647 NVMe-MI: Not Supported 00:23:06.647 Virtualization Management: Not Supported 00:23:06.647 Doorbell Buffer Config: Not Supported 00:23:06.647 Get LBA Status Capability: Not Supported 00:23:06.647 Command & Feature Lockdown Capability: Not Supported 00:23:06.647 Abort Command Limit: 4 00:23:06.647 Async Event Request Limit: 4 00:23:06.647 Number of Firmware Slots: N/A 00:23:06.647 Firmware Slot 1 Read-Only: N/A 00:23:06.647 Firmware Activation Without Reset: N/A 00:23:06.647 Multiple Update Detection Support: N/A 00:23:06.647 Firmware Update Granularity: No Information Provided 
00:23:06.647 Per-Namespace SMART Log: No 00:23:06.647 Asymmetric Namespace Access Log Page: Not Supported 00:23:06.647 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:06.647 Command Effects Log Page: Supported 00:23:06.647 Get Log Page Extended Data: Supported 00:23:06.647 Telemetry Log Pages: Not Supported 00:23:06.647 Persistent Event Log Pages: Not Supported 00:23:06.647 Supported Log Pages Log Page: May Support 00:23:06.647 Commands Supported & Effects Log Page: Not Supported 00:23:06.647 Feature Identifiers & Effects Log Page:May Support 00:23:06.647 NVMe-MI Commands & Effects Log Page: May Support 00:23:06.647 Data Area 4 for Telemetry Log: Not Supported 00:23:06.647 Error Log Page Entries Supported: 128 00:23:06.647 Keep Alive: Supported 00:23:06.647 Keep Alive Granularity: 10000 ms 00:23:06.647 00:23:06.647 NVM Command Set Attributes 00:23:06.647 ========================== 00:23:06.647 Submission Queue Entry Size 00:23:06.647 Max: 64 00:23:06.647 Min: 64 00:23:06.647 Completion Queue Entry Size 00:23:06.647 Max: 16 00:23:06.647 Min: 16 00:23:06.647 Number of Namespaces: 32 00:23:06.647 Compare Command: Supported 00:23:06.647 Write Uncorrectable Command: Not Supported 00:23:06.647 Dataset Management Command: Supported 00:23:06.647 Write Zeroes Command: Supported 00:23:06.647 Set Features Save Field: Not Supported 00:23:06.647 Reservations: Supported 00:23:06.647 Timestamp: Not Supported 00:23:06.647 Copy: Supported 00:23:06.647 Volatile Write Cache: Present 00:23:06.647 Atomic Write Unit (Normal): 1 00:23:06.647 Atomic Write Unit (PFail): 1 00:23:06.647 Atomic Compare & Write Unit: 1 00:23:06.647 Fused Compare & Write: Supported 00:23:06.647 Scatter-Gather List 00:23:06.647 SGL Command Set: Supported 00:23:06.647 SGL Keyed: Supported 00:23:06.647 SGL Bit Bucket Descriptor: Not Supported 00:23:06.647 SGL Metadata Pointer: Not Supported 00:23:06.647 Oversized SGL: Not Supported 00:23:06.647 SGL Metadata Address: Not Supported 00:23:06.647 SGL Offset: Supported 
00:23:06.647 Transport SGL Data Block: Not Supported 00:23:06.647 Replay Protected Memory Block: Not Supported 00:23:06.647 00:23:06.647 Firmware Slot Information 00:23:06.647 ========================= 00:23:06.647 Active slot: 1 00:23:06.647 Slot 1 Firmware Revision: 25.01 00:23:06.647 00:23:06.647 00:23:06.647 Commands Supported and Effects 00:23:06.647 ============================== 00:23:06.647 Admin Commands 00:23:06.647 -------------- 00:23:06.647 Get Log Page (02h): Supported 00:23:06.647 Identify (06h): Supported 00:23:06.647 Abort (08h): Supported 00:23:06.647 Set Features (09h): Supported 00:23:06.647 Get Features (0Ah): Supported 00:23:06.647 Asynchronous Event Request (0Ch): Supported 00:23:06.647 Keep Alive (18h): Supported 00:23:06.647 I/O Commands 00:23:06.647 ------------ 00:23:06.647 Flush (00h): Supported LBA-Change 00:23:06.647 Write (01h): Supported LBA-Change 00:23:06.647 Read (02h): Supported 00:23:06.647 Compare (05h): Supported 00:23:06.647 Write Zeroes (08h): Supported LBA-Change 00:23:06.647 Dataset Management (09h): Supported LBA-Change 00:23:06.647 Copy (19h): Supported LBA-Change 00:23:06.647 00:23:06.647 Error Log 00:23:06.647 ========= 00:23:06.647 00:23:06.647 Arbitration 00:23:06.647 =========== 00:23:06.647 Arbitration Burst: 1 00:23:06.647 00:23:06.647 Power Management 00:23:06.647 ================ 00:23:06.647 Number of Power States: 1 00:23:06.647 Current Power State: Power State #0 00:23:06.647 Power State #0: 00:23:06.647 Max Power: 0.00 W 00:23:06.647 Non-Operational State: Operational 00:23:06.647 Entry Latency: Not Reported 00:23:06.647 Exit Latency: Not Reported 00:23:06.647 Relative Read Throughput: 0 00:23:06.647 Relative Read Latency: 0 00:23:06.647 Relative Write Throughput: 0 00:23:06.647 Relative Write Latency: 0 00:23:06.647 Idle Power: Not Reported 00:23:06.647 Active Power: Not Reported 00:23:06.647 Non-Operational Permissive Mode: Not Supported 00:23:06.647 00:23:06.647 Health Information 00:23:06.647 
================== 00:23:06.647 Critical Warnings: 00:23:06.647 Available Spare Space: OK 00:23:06.647 Temperature: OK 00:23:06.647 Device Reliability: OK 00:23:06.647 Read Only: No 00:23:06.647 Volatile Memory Backup: OK 00:23:06.647 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:06.647 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:06.647 Available Spare: 0% 00:23:06.647 Available Spare Threshold: 0% 00:23:06.647 Life Percentage Used:[2024-10-01 15:20:16.467923] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.647 [2024-10-01 15:20:16.467929] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x95d760) 00:23:06.647 [2024-10-01 15:20:16.467935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.647 [2024-10-01 15:20:16.467947] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bdf00, cid 7, qid 0 00:23:06.647 [2024-10-01 15:20:16.468467] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.647 [2024-10-01 15:20:16.468474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.647 [2024-10-01 15:20:16.468477] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.647 [2024-10-01 15:20:16.468481] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bdf00) on tqpair=0x95d760 00:23:06.647 [2024-10-01 15:20:16.468510] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:06.647 [2024-10-01 15:20:16.468520] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd480) on tqpair=0x95d760 00:23:06.647 [2024-10-01 15:20:16.468526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.647 [2024-10-01 15:20:16.468531] nvme_tcp.c:1079:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x9bd600) on tqpair=0x95d760 00:23:06.647 [2024-10-01 15:20:16.468536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.647 [2024-10-01 15:20:16.468541] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd780) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.468546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.648 [2024-10-01 15:20:16.468551] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.468556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.648 [2024-10-01 15:20:16.468564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.468568] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.468571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.468578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.468590] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.468732] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.468738] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.468742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.468745] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.468752] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.468756] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.468763] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.468770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.468783] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.468956] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.468963] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.468966] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.468970] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.468975] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:06.648 [2024-10-01 15:20:16.468979] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:06.648 [2024-10-01 15:20:16.468989] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.468993] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469001] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.469008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.469019] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.469164] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.469170] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.469173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469177] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.469187] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469191] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469195] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.469201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.469211] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.469379] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.469386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.469389] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469393] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.469403] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469407] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469411] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.469417] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.469427] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.469603] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.469609] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.469613] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469617] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.469628] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469632] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469636] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.469643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.469653] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.469830] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.469836] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.469839] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469843] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.469853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469857] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.469860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.469867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.469877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.470035] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.470042] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.470046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.470059] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.470073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.470084] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.470260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.470267] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.470270] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470274] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.470284] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470288] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470291] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.470298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.470308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.470450] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 15:20:16.470456] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.470460] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470463] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.648 [2024-10-01 15:20:16.470473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.648 [2024-10-01 15:20:16.470482] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.648 [2024-10-01 15:20:16.470489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.648 [2024-10-01 15:20:16.470499] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.648 [2024-10-01 15:20:16.470645] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.648 [2024-10-01 
15:20:16.470651] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.648 [2024-10-01 15:20:16.470654] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.649 [2024-10-01 15:20:16.470658] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.649 [2024-10-01 15:20:16.470668] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.649 [2024-10-01 15:20:16.470672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.649 [2024-10-01 15:20:16.470675] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.649 [2024-10-01 15:20:16.470682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.649 [2024-10-01 15:20:16.470692] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.649 [2024-10-01 15:20:16.470870] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.649 [2024-10-01 15:20:16.470876] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.649 [2024-10-01 15:20:16.470879] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.649 [2024-10-01 15:20:16.470883] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.649 [2024-10-01 15:20:16.470893] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:06.649 [2024-10-01 15:20:16.470896] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:06.649 [2024-10-01 15:20:16.470900] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95d760) 00:23:06.649 [2024-10-01 15:20:16.470907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.649 [2024-10-01 
15:20:16.470917] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bd900, cid 3, qid 0 00:23:06.649 [2024-10-01 15:20:16.475005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:06.649 [2024-10-01 15:20:16.475013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:06.649 [2024-10-01 15:20:16.475017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:06.649 [2024-10-01 15:20:16.475021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bd900) on tqpair=0x95d760 00:23:06.649 [2024-10-01 15:20:16.475028] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:23:06.649 0% 00:23:06.649 Data Units Read: 0 00:23:06.649 Data Units Written: 0 00:23:06.649 Host Read Commands: 0 00:23:06.649 Host Write Commands: 0 00:23:06.649 Controller Busy Time: 0 minutes 00:23:06.649 Power Cycles: 0 00:23:06.649 Power On Hours: 0 hours 00:23:06.649 Unsafe Shutdowns: 0 00:23:06.649 Unrecoverable Media Errors: 0 00:23:06.649 Lifetime Error Log Entries: 0 00:23:06.649 Warning Temperature Time: 0 minutes 00:23:06.649 Critical Temperature Time: 0 minutes 00:23:06.649 00:23:06.649 Number of Queues 00:23:06.649 ================ 00:23:06.649 Number of I/O Submission Queues: 127 00:23:06.649 Number of I/O Completion Queues: 127 00:23:06.649 00:23:06.649 Active Namespaces 00:23:06.649 ================= 00:23:06.649 Namespace ID:1 00:23:06.649 Error Recovery Timeout: Unlimited 00:23:06.649 Command Set Identifier: NVM (00h) 00:23:06.649 Deallocate: Supported 00:23:06.649 Deallocated/Unwritten Error: Not Supported 00:23:06.649 Deallocated Read Value: Unknown 00:23:06.649 Deallocate in Write Zeroes: Not Supported 00:23:06.649 Deallocated Guard Field: 0xFFFF 00:23:06.649 Flush: Supported 00:23:06.649 Reservation: Supported 00:23:06.649 Namespace Sharing Capabilities: Multiple Controllers 00:23:06.649 Size (in LBAs): 131072 (0GiB) 
00:23:06.649 Capacity (in LBAs): 131072 (0GiB) 00:23:06.649 Utilization (in LBAs): 131072 (0GiB) 00:23:06.649 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:06.649 EUI64: ABCDEF0123456789 00:23:06.649 UUID: 08d1eded-a9ed-44c8-b9e2-776477a55823 00:23:06.649 Thin Provisioning: Not Supported 00:23:06.649 Per-NS Atomic Units: Yes 00:23:06.649 Atomic Boundary Size (Normal): 0 00:23:06.649 Atomic Boundary Size (PFail): 0 00:23:06.649 Atomic Boundary Offset: 0 00:23:06.649 Maximum Single Source Range Length: 65535 00:23:06.649 Maximum Copy Length: 65535 00:23:06.649 Maximum Source Range Count: 1 00:23:06.649 NGUID/EUI64 Never Reused: No 00:23:06.649 Namespace Write Protected: No 00:23:06.649 Number of LBA Formats: 1 00:23:06.649 Current LBA Format: LBA Format #00 00:23:06.649 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:06.649 00:23:06.649 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:06.909 15:20:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:06.909 rmmod nvme_tcp 00:23:06.909 rmmod nvme_fabrics 00:23:06.909 rmmod nvme_keyring 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 4053123 ']' 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 4053123 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 4053123 ']' 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 4053123 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:06.909 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4053123 00:23:06.910 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:06.910 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:06.910 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4053123' 00:23:06.910 killing process with pid 4053123 00:23:06.910 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 4053123 00:23:06.910 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 4053123 00:23:07.170 15:20:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:07.170 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:07.170 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:07.170 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:07.170 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:23:07.170 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:07.170 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:23:07.171 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.171 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:07.171 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.171 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.171 15:20:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.082 15:20:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.082 00:23:09.082 real 0m11.304s 00:23:09.082 user 0m7.948s 00:23:09.082 sys 0m5.963s 00:23:09.082 15:20:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:09.082 15:20:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:09.082 ************************************ 00:23:09.082 END TEST nvmf_identify 00:23:09.082 ************************************ 00:23:09.082 15:20:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:09.082 15:20:18 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:09.082 15:20:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:09.082 15:20:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.342 ************************************ 00:23:09.342 START TEST nvmf_perf 00:23:09.342 ************************************ 00:23:09.342 15:20:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:09.342 * Looking for test storage... 00:23:09.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.342 
15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:09.342 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.343 15:20:19 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:09.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.343 --rc genhtml_branch_coverage=1 00:23:09.343 --rc genhtml_function_coverage=1 00:23:09.343 --rc genhtml_legend=1 00:23:09.343 --rc geninfo_all_blocks=1 00:23:09.343 --rc geninfo_unexecuted_blocks=1 00:23:09.343 00:23:09.343 ' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:09.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.343 --rc genhtml_branch_coverage=1 00:23:09.343 --rc genhtml_function_coverage=1 00:23:09.343 --rc genhtml_legend=1 00:23:09.343 --rc geninfo_all_blocks=1 00:23:09.343 --rc geninfo_unexecuted_blocks=1 00:23:09.343 00:23:09.343 ' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:09.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.343 --rc genhtml_branch_coverage=1 00:23:09.343 --rc genhtml_function_coverage=1 00:23:09.343 --rc genhtml_legend=1 00:23:09.343 --rc geninfo_all_blocks=1 00:23:09.343 --rc geninfo_unexecuted_blocks=1 00:23:09.343 00:23:09.343 ' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:09.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.343 --rc genhtml_branch_coverage=1 00:23:09.343 --rc genhtml_function_coverage=1 00:23:09.343 --rc genhtml_legend=1 00:23:09.343 --rc geninfo_all_blocks=1 00:23:09.343 --rc geninfo_unexecuted_blocks=1 00:23:09.343 00:23:09.343 ' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.343 15:20:19 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.343 15:20:19 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.343 15:20:19 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:09.343 15:20:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:17.481 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:17.481 15:20:26 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:17.481 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:17.481 
15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:17.481 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:17.481 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # is_hw=yes 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # 
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.481 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:17.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:23:17.482 00:23:17.482 --- 10.0.0.2 ping statistics --- 00:23:17.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.482 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:23:17.482 00:23:17.482 --- 10.0.0.1 ping statistics --- 00:23:17.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.482 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # return 0 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=4057505 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 4057505 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:17.482 
15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 4057505 ']' 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.482 15:20:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.482 [2024-10-01 15:20:26.639538] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:23:17.482 [2024-10-01 15:20:26.639605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.482 [2024-10-01 15:20:26.710501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.482 [2024-10-01 15:20:26.785110] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.482 [2024-10-01 15:20:26.785151] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.482 [2024-10-01 15:20:26.785160] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.482 [2024-10-01 15:20:26.785167] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.482 [2024-10-01 15:20:26.785173] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:17.482 [2024-10-01 15:20:26.785335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.482 [2024-10-01 15:20:26.785441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.482 [2024-10-01 15:20:26.785596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.482 [2024-10-01 15:20:26.785597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.741 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.741 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:17.741 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:17.741 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:17.741 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.741 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.741 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:17.741 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:18.312 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:18.312 15:20:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:18.312 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:18.312 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:18.572 15:20:28 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:18.572 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:18.572 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:18.572 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:18.572 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:18.832 [2024-10-01 15:20:28.509235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.832 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.093 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:19.093 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:19.093 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:19.093 15:20:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:19.353 15:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.613 [2024-10-01 15:20:29.223819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.613 15:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:19.613 15:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:19.613 15:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:19.613 15:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:19.613 15:20:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:20.997 Initializing NVMe Controllers 00:23:20.997 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:20.997 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:20.997 Initialization complete. Launching workers. 00:23:20.997 ======================================================== 00:23:20.997 Latency(us) 00:23:20.997 Device Information : IOPS MiB/s Average min max 00:23:20.997 PCIE (0000:65:00.0) NSID 1 from core 0: 79285.41 309.71 403.08 13.31 4790.16 00:23:20.997 ======================================================== 00:23:20.997 Total : 79285.41 309.71 403.08 13.31 4790.16 00:23:20.997 00:23:20.997 15:20:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:22.379 Initializing NVMe Controllers 00:23:22.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:22.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:22.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:22.379 Initialization complete. Launching workers. 
00:23:22.379 ======================================================== 00:23:22.379 Latency(us) 00:23:22.379 Device Information : IOPS MiB/s Average min max 00:23:22.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11201.54 112.43 44998.72 00:23:22.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 20519.67 5988.68 48887.67 00:23:22.379 ======================================================== 00:23:22.379 Total : 143.00 0.56 14524.79 112.43 48887.67 00:23:22.379 00:23:22.379 15:20:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:23.759 Initializing NVMe Controllers 00:23:23.759 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:23.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:23.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:23.759 Initialization complete. Launching workers. 
00:23:23.759 ======================================================== 00:23:23.759 Latency(us) 00:23:23.759 Device Information : IOPS MiB/s Average min max 00:23:23.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10487.19 40.97 3089.94 490.90 45628.66 00:23:23.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3810.71 14.89 8452.07 6001.10 16770.01 00:23:23.760 ======================================================== 00:23:23.760 Total : 14297.90 55.85 4519.06 490.90 45628.66 00:23:23.760 00:23:23.760 15:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:23.760 15:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:23.760 15:20:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:26.296 Initializing NVMe Controllers 00:23:26.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.296 Controller IO queue size 128, less than required. 00:23:26.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.296 Controller IO queue size 128, less than required. 00:23:26.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:26.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:26.296 Initialization complete. Launching workers. 
00:23:26.296 ======================================================== 00:23:26.296 Latency(us) 00:23:26.296 Device Information : IOPS MiB/s Average min max 00:23:26.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1672.66 418.16 77650.94 49728.97 111060.68 00:23:26.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.38 151.59 225266.27 69921.55 373452.58 00:23:26.296 ======================================================== 00:23:26.296 Total : 2279.04 569.76 116926.53 49728.97 373452.58 00:23:26.296 00:23:26.296 15:20:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:26.296 No valid NVMe controllers or AIO or URING devices found 00:23:26.296 Initializing NVMe Controllers 00:23:26.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.296 Controller IO queue size 128, less than required. 00:23:26.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.296 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:26.296 Controller IO queue size 128, less than required. 00:23:26.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.296 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:26.296 WARNING: Some requested NVMe devices were skipped 00:23:26.296 15:20:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:28.836 Initializing NVMe Controllers 00:23:28.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.836 Controller IO queue size 128, less than required. 00:23:28.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:28.836 Controller IO queue size 128, less than required. 00:23:28.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:28.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:28.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:28.836 Initialization complete. Launching workers. 
00:23:28.836 00:23:28.836 ==================== 00:23:28.836 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:28.836 TCP transport: 00:23:28.836 polls: 23390 00:23:28.836 idle_polls: 14757 00:23:28.836 sock_completions: 8633 00:23:28.836 nvme_completions: 6621 00:23:28.836 submitted_requests: 10040 00:23:28.836 queued_requests: 1 00:23:28.836 00:23:28.836 ==================== 00:23:28.836 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:28.836 TCP transport: 00:23:28.836 polls: 19784 00:23:28.836 idle_polls: 10169 00:23:28.836 sock_completions: 9615 00:23:28.836 nvme_completions: 7019 00:23:28.836 submitted_requests: 10590 00:23:28.836 queued_requests: 1 00:23:28.836 ======================================================== 00:23:28.836 Latency(us) 00:23:28.836 Device Information : IOPS MiB/s Average min max 00:23:28.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1654.88 413.72 79441.26 41815.69 129070.58 00:23:28.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1754.37 438.59 73617.02 32162.28 116212.87 00:23:28.836 ======================================================== 00:23:28.836 Total : 3409.25 852.31 76444.15 32162.28 129070.58 00:23:28.836 00:23:28.836 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:28.836 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:29.096 15:20:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.096 rmmod nvme_tcp 00:23:29.096 rmmod nvme_fabrics 00:23:29.096 rmmod nvme_keyring 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 4057505 ']' 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 4057505 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 4057505 ']' 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 4057505 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4057505 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4057505' 00:23:29.096 killing process with pid 4057505 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@969 -- # kill 4057505 00:23:29.096 15:20:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 4057505 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.639 15:20:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.551 15:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.551 00:23:33.551 real 0m24.037s 00:23:33.551 user 0m57.830s 00:23:33.551 sys 0m8.513s 00:23:33.551 15:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:33.551 15:20:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:33.551 ************************************ 00:23:33.551 END TEST nvmf_perf 00:23:33.551 ************************************ 00:23:33.551 15:20:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:33.551 15:20:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:33.551 15:20:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:33.551 15:20:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.551 ************************************ 00:23:33.551 START TEST nvmf_fio_host 00:23:33.551 ************************************ 00:23:33.551 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:33.551 * Looking for test storage... 00:23:33.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.552 15:20:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.552 15:20:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:33.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.552 --rc genhtml_branch_coverage=1 00:23:33.552 --rc genhtml_function_coverage=1 00:23:33.552 --rc genhtml_legend=1 00:23:33.552 --rc geninfo_all_blocks=1 00:23:33.552 --rc geninfo_unexecuted_blocks=1 00:23:33.552 00:23:33.552 ' 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:33.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.552 --rc genhtml_branch_coverage=1 00:23:33.552 --rc genhtml_function_coverage=1 00:23:33.552 --rc genhtml_legend=1 00:23:33.552 --rc geninfo_all_blocks=1 00:23:33.552 --rc geninfo_unexecuted_blocks=1 00:23:33.552 00:23:33.552 ' 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:33.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.552 --rc genhtml_branch_coverage=1 00:23:33.552 --rc genhtml_function_coverage=1 00:23:33.552 --rc genhtml_legend=1 00:23:33.552 --rc geninfo_all_blocks=1 00:23:33.552 --rc geninfo_unexecuted_blocks=1 00:23:33.552 00:23:33.552 ' 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:33.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.552 --rc genhtml_branch_coverage=1 00:23:33.552 --rc genhtml_function_coverage=1 00:23:33.552 --rc genhtml_legend=1 00:23:33.552 --rc geninfo_all_blocks=1 00:23:33.552 --rc geninfo_unexecuted_blocks=1 00:23:33.552 00:23:33.552 ' 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:33.552 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:33.553 15:20:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.553 15:20:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:41.778 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:41.778 15:20:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:41.778 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:41.778 15:20:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:41.778 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.778 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:41.779 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # is_hw=yes 
00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:23:41.779 00:23:41.779 --- 10.0.0.2 ping statistics --- 00:23:41.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.779 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:23:41.779 00:23:41.779 --- 10.0.0.1 ping statistics --- 00:23:41.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.779 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # return 0 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4064543 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4064543 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 4064543 ']' 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.779 15:20:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.779 [2024-10-01 15:20:50.687668] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:23:41.779 [2024-10-01 15:20:50.687731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.779 [2024-10-01 15:20:50.758552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.779 [2024-10-01 15:20:50.833294] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.779 [2024-10-01 15:20:50.833333] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:41.779 [2024-10-01 15:20:50.833341] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.779 [2024-10-01 15:20:50.833347] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.779 [2024-10-01 15:20:50.833353] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.779 [2024-10-01 15:20:50.833494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.779 [2024-10-01 15:20:50.833610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.779 [2024-10-01 15:20:50.833768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.779 [2024-10-01 15:20:50.833768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.779 15:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.779 15:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:41.779 15:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:42.039 [2024-10-01 15:20:51.650331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.039 15:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:42.039 15:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.039 15:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.039 15:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:42.039 Malloc1 00:23:42.299 15:20:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:42.299 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:42.558 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.819 [2024-10-01 15:20:52.440119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:42.819 15:20:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:42.819 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:43.100 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:43.100 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:43.100 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:43.100 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:43.100 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:43.100 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:43.100 15:20:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:43.364 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:43.364 fio-3.35 00:23:43.364 Starting 1 thread 00:23:45.927 00:23:45.927 test: (groupid=0, jobs=1): err= 0: pid=4065202: Tue Oct 1 15:20:55 2024 00:23:45.927 read: IOPS=13.9k, BW=54.4MiB/s (57.0MB/s)(109MiB/2004msec) 00:23:45.927 slat (usec): min=2, max=310, avg= 2.16, stdev= 2.56 00:23:45.927 clat (usec): min=3336, max=8896, avg=5048.59, stdev=359.94 00:23:45.927 lat (usec): min=3339, max=8898, avg=5050.75, stdev=360.15 00:23:45.927 clat percentiles (usec): 00:23:45.927 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 00:23:45.927 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5014], 60.00th=[ 5145], 00:23:45.927 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:23:45.927 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 7832], 99.95th=[ 8160], 00:23:45.927 | 99.99th=[ 8717] 00:23:45.927 bw ( KiB/s): min=54240, max=56256, per=100.00%, avg=55702.00, stdev=976.73, samples=4 00:23:45.927 iops : min=13560, max=14066, avg=13925.50, stdev=244.22, samples=4 00:23:45.927 write: IOPS=13.9k, BW=54.5MiB/s (57.1MB/s)(109MiB/2004msec); 0 zone resets 00:23:45.927 slat (usec): min=2, max=284, avg= 2.22, stdev= 1.85 00:23:45.927 clat (usec): min=2710, max=7991, avg=4079.85, stdev=304.91 00:23:45.927 lat (usec): min=2712, max=7993, avg=4082.07, stdev=305.15 00:23:45.927 clat percentiles (usec): 00:23:45.927 | 1.00th=[ 3425], 5.00th=[ 3621], 10.00th=[ 3752], 20.00th=[ 3851], 00:23:45.927 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:23:45.927 | 70.00th=[ 
4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:23:45.927 | 99.00th=[ 4817], 99.50th=[ 5538], 99.90th=[ 6456], 99.95th=[ 6783], 00:23:45.927 | 99.99th=[ 7177] 00:23:45.927 bw ( KiB/s): min=54664, max=56192, per=99.95%, avg=55746.00, stdev=723.85, samples=4 00:23:45.927 iops : min=13666, max=14048, avg=13936.50, stdev=180.96, samples=4 00:23:45.927 lat (msec) : 4=19.53%, 10=80.47% 00:23:45.927 cpu : usr=73.39%, sys=25.36%, ctx=35, majf=0, minf=9 00:23:45.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:45.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:45.927 issued rwts: total=27907,27944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:45.927 00:23:45.927 Run status group 0 (all jobs): 00:23:45.927 READ: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2004-2004msec 00:23:45.927 WRITE: bw=54.5MiB/s (57.1MB/s), 54.5MiB/s-54.5MiB/s (57.1MB/s-57.1MB/s), io=109MiB (114MB), run=2004-2004msec 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:45.927 15:20:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:45.927 15:20:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:46.188 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:46.188 fio-3.35 00:23:46.188 Starting 1 thread 00:23:48.730 [2024-10-01 15:20:57.992023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x926a10 is same with the state(6) to be set 00:23:48.730 00:23:48.730 test: (groupid=0, jobs=1): err= 0: pid=4065902: Tue Oct 1 15:20:58 2024 00:23:48.730 read: IOPS=9234, BW=144MiB/s (151MB/s)(289MiB/2006msec) 00:23:48.730 slat (usec): min=3, max=110, avg= 3.59, stdev= 1.59 00:23:48.730 clat (usec): min=1856, max=48302, avg=8395.73, stdev=3299.33 00:23:48.730 lat (usec): min=1860, max=48306, avg=8399.32, stdev=3299.39 00:23:48.730 clat percentiles (usec): 00:23:48.730 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6587], 00:23:48.730 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8586], 00:23:48.730 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[10683], 95.00th=[11076], 00:23:48.730 | 99.00th=[12911], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:23:48.730 | 99.99th=[48497] 00:23:48.730 bw ( KiB/s): min=58624, max=85728, per=49.41%, avg=73008.00, stdev=11516.34, samples=4 00:23:48.730 iops : min= 3664, max= 5358, avg=4563.00, stdev=719.77, samples=4 00:23:48.730 write: IOPS=5456, BW=85.3MiB/s (89.4MB/s)(149MiB/1753msec); 0 zone resets 00:23:48.730 slat (usec): min=39, max=358, avg=40.83, stdev= 6.87 00:23:48.730 clat (usec): min=1804, max=51782, avg=9566.71, stdev=2592.90 00:23:48.730 lat (usec): min=1843, max=51822, 
avg=9607.54, stdev=2593.44 00:23:48.730 clat percentiles (usec): 00:23:48.730 | 1.00th=[ 6652], 5.00th=[ 7373], 10.00th=[ 7767], 20.00th=[ 8291], 00:23:48.730 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:23:48.730 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12256], 00:23:48.730 | 99.00th=[13960], 99.50th=[14484], 99.90th=[49546], 99.95th=[50070], 00:23:48.730 | 99.99th=[51643] 00:23:48.730 bw ( KiB/s): min=60320, max=89120, per=86.92%, avg=75880.00, stdev=12351.84, samples=4 00:23:48.730 iops : min= 3770, max= 5570, avg=4742.50, stdev=771.99, samples=4 00:23:48.730 lat (msec) : 2=0.02%, 4=0.34%, 10=75.65%, 20=23.54%, 50=0.43% 00:23:48.730 lat (msec) : 100=0.02% 00:23:48.730 cpu : usr=84.49%, sys=14.16%, ctx=15, majf=0, minf=33 00:23:48.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:48.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:48.730 issued rwts: total=18525,9565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:48.730 00:23:48.730 Run status group 0 (all jobs): 00:23:48.730 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=289MiB (304MB), run=2006-2006msec 00:23:48.730 WRITE: bw=85.3MiB/s (89.4MB/s), 85.3MiB/s-85.3MiB/s (89.4MB/s-89.4MB/s), io=149MiB (157MB), run=1753-1753msec 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 
00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.730 rmmod nvme_tcp 00:23:48.730 rmmod nvme_fabrics 00:23:48.730 rmmod nvme_keyring 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 4064543 ']' 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 4064543 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 4064543 ']' 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 4064543 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4064543 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4064543' 00:23:48.730 killing process with pid 4064543 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 4064543 00:23:48.730 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 4064543 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.991 15:20:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.903 15:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.903 00:23:50.903 real 0m17.692s 00:23:50.903 user 1m10.940s 00:23:50.903 sys 0m7.554s 00:23:50.903 15:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:23:50.903 15:21:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.903 ************************************ 00:23:50.903 END TEST nvmf_fio_host 00:23:50.903 ************************************ 00:23:51.165 15:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:51.165 15:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:51.165 15:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.165 15:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.165 ************************************ 00:23:51.165 START TEST nvmf_failover 00:23:51.165 ************************************ 00:23:51.165 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:51.165 * Looking for test storage... 
00:23:51.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.165 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:51.165 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:23:51.165 15:21:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.165 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.426 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.427 --rc genhtml_branch_coverage=1 00:23:51.427 --rc genhtml_function_coverage=1 00:23:51.427 --rc genhtml_legend=1 00:23:51.427 --rc geninfo_all_blocks=1 00:23:51.427 --rc geninfo_unexecuted_blocks=1 00:23:51.427 00:23:51.427 ' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:23:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.427 --rc genhtml_branch_coverage=1 00:23:51.427 --rc genhtml_function_coverage=1 00:23:51.427 --rc genhtml_legend=1 00:23:51.427 --rc geninfo_all_blocks=1 00:23:51.427 --rc geninfo_unexecuted_blocks=1 00:23:51.427 00:23:51.427 ' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.427 --rc genhtml_branch_coverage=1 00:23:51.427 --rc genhtml_function_coverage=1 00:23:51.427 --rc genhtml_legend=1 00:23:51.427 --rc geninfo_all_blocks=1 00:23:51.427 --rc geninfo_unexecuted_blocks=1 00:23:51.427 00:23:51.427 ' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.427 --rc genhtml_branch_coverage=1 00:23:51.427 --rc genhtml_function_coverage=1 00:23:51.427 --rc genhtml_legend=1 00:23:51.427 --rc geninfo_all_blocks=1 00:23:51.427 --rc geninfo_unexecuted_blocks=1 00:23:51.427 00:23:51.427 ' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.427 15:21:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.571 15:21:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:59.571 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:59.571 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:59.572 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:59.572 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ up == up ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:59.572 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # is_hw=yes 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:23:59.572 00:23:59.572 --- 10.0.0.2 ping statistics --- 00:23:59.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.572 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:23:59.572 00:23:59.572 --- 10.0.0.1 ping statistics --- 00:23:59.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.572 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # return 0 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=4070904 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 4070904 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 4070904 ']' 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:59.572 15:21:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.572 [2024-10-01 15:21:08.653025] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:23:59.573 [2024-10-01 15:21:08.653095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.573 [2024-10-01 15:21:08.745041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:59.573 [2024-10-01 15:21:08.838912] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.573 [2024-10-01 15:21:08.838976] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.573 [2024-10-01 15:21:08.838985] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.573 [2024-10-01 15:21:08.838992] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:59.573 [2024-10-01 15:21:08.839007] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.573 [2024-10-01 15:21:08.839136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.573 [2024-10-01 15:21:08.839472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.573 [2024-10-01 15:21:08.839473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.832 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.832 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:59.832 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:59.832 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:59.832 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:59.832 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.832 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:59.832 [2024-10-01 15:21:09.663519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.092 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:00.092 Malloc0 00:24:00.092 15:21:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.372 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.631 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.631 [2024-10-01 15:21:10.448744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.631 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:00.890 [2024-10-01 15:21:10.633245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:00.890 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:01.151 [2024-10-01 15:21:10.817820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4071503 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4071503 /var/tmp/bdevperf.sock 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 4071503 ']' 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.151 15:21:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:02.090 15:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.090 15:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:02.090 15:21:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:02.351 NVMe0n1 00:24:02.351 15:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:02.610 00:24:02.870 15:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.870 15:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4071830 00:24:02.870 15:21:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:03.810 15:21:13 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-10-01 15:21:13.615803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81010 is same with the state(6) to be set 00:24:03.810 15:21:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:07.105 15:21:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.366 00:24:07.366 15:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 [2024-10-01 15:21:17.214045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.366 [2024-10-01 15:21:17.214442]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214499] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214558] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214618] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.367 [2024-10-01 15:21:17.214673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e81dc0 is same with the state(6) to be set 00:24:07.627 15:21:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@50 -- # sleep 3
00:24:10.924 15:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:10.924 [2024-10-01 15:21:20.404293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:10.924 15:21:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:11.864 15:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:11.864 [2024-10-01 15:21:21.590632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82d10 is same with the state(6) to be set
00:24:11.864 [previous *ERROR* line repeated 6 more times for tqpair=0x1e82d10, timestamps 15:21:21.590667 through 15:21:21.590693]
00:24:11.864 15:21:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4071830
00:24:18.456 {
00:24:18.456   "results": [
00:24:18.456     {
00:24:18.456       "job": "NVMe0n1",
00:24:18.456       "core_mask": "0x1",
00:24:18.456       "workload": "verify",
00:24:18.456       "status": "finished",
00:24:18.456       "verify_range": {
00:24:18.456         "start": 0,
00:24:18.456         "length": 16384
00:24:18.456       },
00:24:18.456       "queue_depth": 128,
00:24:18.456       "io_size": 4096,
00:24:18.456       "runtime": 15.011225,
00:24:18.456       "iops": 11179.900374553043,
00:24:18.456       "mibps": 43.67148583809782,
00:24:18.456       "io_failed": 6821,
00:24:18.456       "io_timeout": 0,
00:24:18.456       "avg_latency_us": 10973.822388807774,
00:24:18.456       "min_latency_us": 778.24,
00:24:18.456       "max_latency_us": 16274.773333333333
00:24:18.456     }
00:24:18.456   ],
00:24:18.456   "core_count": 1
00:24:18.456 }
00:24:18.456 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4071503
00:24:18.456 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 4071503 ']'
00:24:18.456 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 4071503
00:24:18.456 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:18.456 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:18.456 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4071503
00:24:18.457 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:18.457 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:18.457 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4071503'
killing process with pid 4071503
00:24:18.457 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 4071503
00:24:18.457 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # 
wait 4071503
00:24:18.457 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:18.457 [2024-10-01 15:21:10.889261] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization...
00:24:18.457 [2024-10-01 15:21:10.889322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4071503 ]
00:24:18.457 [2024-10-01 15:21:10.950403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:18.457 [2024-10-01 15:21:11.014898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:24:18.457 Running I/O for 15 seconds...
11476.00 IOPS, 44.83 MiB/s
[2024-10-01 15:21:13.618199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-01 15:21:13.618235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.457-00:24:18.458 [the command/completion pair above repeated for 53 more consecutive 8-block I/Os with varying cid, lba:98632 through lba:99048, each completing ABORTED - SQ DELETION (00/08); READ commands through lba:98984, then WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) from lba:98992; timestamps 15:21:13.618252 through 15:21:13.619219]
00:24:18.458 [2024-10-01 15:21:13.619230] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.458 [2024-10-01 15:21:13.619594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.458 [2024-10-01 15:21:13.619602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 
15:21:13.619648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619744] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.459 [2024-10-01 15:21:13.619936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619962] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.619970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99376 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.619978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.619990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99384 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99392 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99400 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 
[2024-10-01 15:21:13.620073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99408 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99416 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99424 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99432 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99440 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99448 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99456 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99464 len:8 PRP1 0x0 PRP2 0x0 00:24:18.459 [2024-10-01 15:21:13.620303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.459 [2024-10-01 15:21:13.620311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.459 [2024-10-01 15:21:13.620317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.459 [2024-10-01 15:21:13.620324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99472 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99480 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99488 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99496 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99504 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99512 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99520 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99528 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99544 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99552 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99560 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 
[2024-10-01 15:21:13.620659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99568 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99576 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99584 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:99592 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99600 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99608 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.460 [2024-10-01 15:21:13.620834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.460 [2024-10-01 15:21:13.620840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99616 len:8 PRP1 0x0 PRP2 0x0 00:24:18.460 [2024-10-01 15:21:13.620848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.460 [2024-10-01 15:21:13.620856] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:18.460 [2024-10-01 15:21:13.620861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:18.460 [2024-10-01 15:21:13.620868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99624 len:8 PRP1 0x0 PRP2 0x0
00:24:18.460 [2024-10-01 15:21:13.620875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.460 [2024-10-01 15:21:13.620883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:18.460 [2024-10-01 15:21:13.620889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:18.460 [2024-10-01 15:21:13.620895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99632 len:8 PRP1 0x0 PRP2 0x0
00:24:18.460 [2024-10-01 15:21:13.620903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.460 [2024-10-01 15:21:13.620911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:18.460 [2024-10-01 15:21:13.620917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:18.460 [2024-10-01 15:21:13.620923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99640 len:8 PRP1 0x0 PRP2 0x0
00:24:18.460 [2024-10-01 15:21:13.620931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.460 [2024-10-01 15:21:13.620967] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f87710 was disconnected and freed. reset controller.
00:24:18.460 [2024-10-01 15:21:13.620979] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:18.460 [2024-10-01 15:21:13.621004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:18.460 [2024-10-01 15:21:13.621013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.460 [2024-10-01 15:21:13.621023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:18.460 [2024-10-01 15:21:13.621030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.460 [2024-10-01 15:21:13.621039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:18.460 [2024-10-01 15:21:13.621046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.461 [2024-10-01 15:21:13.621054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:18.461 [2024-10-01 15:21:13.621063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.461 [2024-10-01 15:21:13.621070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:18.461 [2024-10-01 15:21:13.621106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f66e40 (9): Bad file descriptor
00:24:18.461 [2024-10-01 15:21:13.624639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:18.461 [2024-10-01 15:21:13.666893] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:18.461 11365.50 IOPS, 44.40 MiB/s 11286.33 IOPS, 44.09 MiB/s 11270.50 IOPS, 44.03 MiB/s
[2024-10-01 15:21:17.215028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.461 [2024-10-01 15:21:17.215065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.461 [2024-10-01 15:21:17.215081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.461 [2024-10-01 15:21:17.215090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.461 [2024-10-01 15:21:17.215100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.461 [2024-10-01 15:21:17.215109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.461 [2024-10-01 15:21:17.215119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:18.461 [2024-10-01 15:21:17.215127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:18.461 [2024-10-01 15:21:17.215137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.461 [2024-10-01 15:21:17.215349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.461 [2024-10-01 15:21:17.215603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.461 [2024-10-01 15:21:17.215610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 
[2024-10-01 15:21:17.215649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 
[2024-10-01 15:21:17.215952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.215987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.215999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.462 [2024-10-01 15:21:17.216253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.462 [2024-10-01 15:21:17.216317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.462 [2024-10-01 15:21:17.216325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463 [2024-10-01 15:21:17.216335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.463 [2024-10-01 15:21:17.216343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463 [2024-10-01 15:21:17.216353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.463 [2024-10-01 15:21:17.216361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463 [2024-10-01 15:21:17.216371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.463 [2024-10-01 15:21:17.216379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463 [2024-10-01 15:21:17.216388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.463 [2024-10-01 15:21:17.216396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463 [2024-10-01 15:21:17.216405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.463 [2024-10-01 15:21:17.216413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463 [2024-10-01 15:21:17.216423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.463 [2024-10-01 15:21:17.216430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463 [2024-10-01 15:21:17.216440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:37376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.463 [2024-10-01 15:21:17.216448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463
[2024-10-01 15:21:17.216458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:37384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.463
[2024-10-01 15:21:17.216466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463
[... identical READ / ABORTED - SQ DELETION pairs repeated for sqid:1, lba:37392 through lba:37592, elided ...]
[2024-10-01 15:21:17.216935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.463
[2024-10-01 15:21:17.216942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.463
[... identical WRITE / ABORTED - SQ DELETION pairs repeated for sqid:1, lba:37608 through lba:37768, elided ...]
[2024-10-01 15:21:17.217343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.464
[2024-10-01 15:21:17.217350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.464
[2024-10-01 15:21:17.217357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37776 len:8 PRP1 0x0 PRP2 0x0 00:24:18.464
[2024-10-01 15:21:17.217366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.464
[2024-10-01 15:21:17.217404] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f89860 was disconnected and freed. reset controller. 00:24:18.464
[2024-10-01 15:21:17.217414] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:18.464
[2024-10-01 15:21:17.217435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.464
[2024-10-01 15:21:17.217444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.464
[... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0, cid:2 through cid:0, elided ...]
[2024-10-01 15:21:17.217503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:18.464
[2024-10-01 15:21:17.221052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.464
[2024-10-01 15:21:17.221081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f66e40 (9): Bad file descriptor 00:24:18.464
[2024-10-01 15:21:17.297682] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:18.464
11112.20 IOPS, 43.41 MiB/s 11161.67 IOPS, 43.60 MiB/s 11207.43 IOPS, 43.78 MiB/s 11218.25 IOPS, 43.82 MiB/s 11220.56 IOPS, 43.83 MiB/s
[2024-10-01 15:21:21.591501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.464
[2024-10-01 15:21:21.591538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.464
[2024-10-01 15:21:21.591554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.464
[2024-10-01 15:21:21.591563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.464
[... identical WRITE / ABORTED - SQ DELETION pairs repeated for sqid:1, lba:56360 through lba:56776, elided ...]
[2024-10-01 15:21:21.592574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 
[2024-10-01 15:21:21.592788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592891] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.592977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.592988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:18.466 [2024-10-01 15:21:21.593184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.466 [2024-10-01 15:21:21.593215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57040 len:8 PRP1 0x0 PRP2 0x0 00:24:18.466 [2024-10-01 15:21:21.593223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.466 [2024-10-01 15:21:21.593241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.466 [2024-10-01 15:21:21.593249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:57048 len:8 PRP1 0x0 PRP2 0x0 00:24:18.466 [2024-10-01 15:21:21.593259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.466 [2024-10-01 15:21:21.593269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.466 [2024-10-01 15:21:21.593276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.466 [2024-10-01 15:21:21.593282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57056 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57064 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57072 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 
15:21:21.593357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57080 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57088 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57096 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 
[2024-10-01 15:21:21.593457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57104 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57112 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57120 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57128 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57136 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57144 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57152 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593655] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57160 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57168 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57176 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57184 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 
[2024-10-01 15:21:21.593756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57192 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57200 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57208 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57216 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57224 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57232 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.467 [2024-10-01 15:21:21.593951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57240 len:8 PRP1 0x0 PRP2 0x0 00:24:18.467 [2024-10-01 15:21:21.593959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.467 [2024-10-01 15:21:21.593966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.467 [2024-10-01 15:21:21.593973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.593979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57248 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.593987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.593998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57256 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57264 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57272 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57280 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57288 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57296 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57304 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57312 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57320 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57328 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57336 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 
[2024-10-01 15:21:21.594351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57352 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:18.468 [2024-10-01 15:21:21.594379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:18.468 [2024-10-01 15:21:21.594386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57360 len:8 PRP1 0x0 PRP2 0x0 00:24:18.468 [2024-10-01 15:21:21.594393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594429] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f89520 was disconnected and freed. reset controller. 
00:24:18.468 [2024-10-01 15:21:21.594439] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:18.468 [2024-10-01 15:21:21.594460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.468 [2024-10-01 15:21:21.594469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.468 [2024-10-01 15:21:21.594485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.468 [2024-10-01 15:21:21.594502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.468 [2024-10-01 15:21:21.594518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.468 [2024-10-01 15:21:21.594526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:18.468 [2024-10-01 15:21:21.598064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:18.468 [2024-10-01 15:21:21.598091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f66e40 (9): Bad file descriptor 00:24:18.468 [2024-10-01 15:21:21.677466] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:18.468 11151.90 IOPS, 43.56 MiB/s 11153.00 IOPS, 43.57 MiB/s 11159.83 IOPS, 43.59 MiB/s 11182.08 IOPS, 43.68 MiB/s 11185.43 IOPS, 43.69 MiB/s 11183.80 IOPS, 43.69 MiB/s 00:24:18.468 Latency(us) 00:24:18.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.468 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:18.468 Verification LBA range: start 0x0 length 0x4000 00:24:18.468 NVMe0n1 : 15.01 11179.90 43.67 454.39 0.00 10973.82 778.24 16274.77 00:24:18.468 =================================================================================================================== 00:24:18.468 Total : 11179.90 43.67 454.39 0.00 10973.82 778.24 16274.77 00:24:18.468 Received shutdown signal, test time was about 15.000000 seconds 00:24:18.468 00:24:18.468 Latency(us) 00:24:18.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.468 =================================================================================================================== 00:24:18.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4074844 00:24:18.468 15:21:27 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4074844 /var/tmp/bdevperf.sock 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 4074844 ']' 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.468 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.469 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.469 15:21:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:19.174 15:21:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.174 15:21:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:19.174 15:21:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:19.174 [2024-10-01 15:21:28.835838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.174 15:21:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 
00:24:19.174 [2024-10-01 15:21:29.020311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:19.456 15:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:19.715 NVMe0n1 00:24:19.715 15:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.284 00:24:20.284 15:21:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.284 00:24:20.544 15:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.544 15:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:20.544 15:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.803 15:21:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:24.098 15:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.098 15:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:24.098 15:21:33 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4075982 00:24:24.098 15:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:24.098 15:21:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4075982 00:24:25.038 { 00:24:25.038 "results": [ 00:24:25.038 { 00:24:25.038 "job": "NVMe0n1", 00:24:25.038 "core_mask": "0x1", 00:24:25.038 "workload": "verify", 00:24:25.038 "status": "finished", 00:24:25.038 "verify_range": { 00:24:25.038 "start": 0, 00:24:25.038 "length": 16384 00:24:25.038 }, 00:24:25.038 "queue_depth": 128, 00:24:25.038 "io_size": 4096, 00:24:25.038 "runtime": 1.009637, 00:24:25.038 "iops": 11441.735990261846, 00:24:25.038 "mibps": 44.69428121196034, 00:24:25.038 "io_failed": 0, 00:24:25.038 "io_timeout": 0, 00:24:25.038 "avg_latency_us": 11132.583416435826, 00:24:25.038 "min_latency_us": 1665.7066666666667, 00:24:25.038 "max_latency_us": 10594.986666666666 00:24:25.038 } 00:24:25.038 ], 00:24:25.038 "core_count": 1 00:24:25.038 } 00:24:25.038 15:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.038 [2024-10-01 15:21:27.878863] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:24:25.038 [2024-10-01 15:21:27.878926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074844 ] 00:24:25.038 [2024-10-01 15:21:27.938681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.038 [2024-10-01 15:21:28.001205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.038 [2024-10-01 15:21:30.507640] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:25.038 [2024-10-01 15:21:30.507691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.038 [2024-10-01 15:21:30.507703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.038 [2024-10-01 15:21:30.507713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.038 [2024-10-01 15:21:30.507721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.038 [2024-10-01 15:21:30.507729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.038 [2024-10-01 15:21:30.507737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.038 [2024-10-01 15:21:30.507745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.038 [2024-10-01 15:21:30.507752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.038 [2024-10-01 15:21:30.507760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:25.038 [2024-10-01 15:21:30.507787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:25.038 [2024-10-01 15:21:30.507802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f48e40 (9): Bad file descriptor 00:24:25.038 [2024-10-01 15:21:30.512950] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:25.038 Running I/O for 1 seconds... 00:24:25.038 11406.00 IOPS, 44.55 MiB/s 00:24:25.038 Latency(us) 00:24:25.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.038 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:25.038 Verification LBA range: start 0x0 length 0x4000 00:24:25.038 NVMe0n1 : 1.01 11441.74 44.69 0.00 0.00 11132.58 1665.71 10594.99 00:24:25.038 =================================================================================================================== 00:24:25.038 Total : 11441.74 44.69 0.00 0.00 11132.58 1665.71 10594.99 00:24:25.038 15:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.038 15:21:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:25.299 15:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.559 15:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:24:25.559 15:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:25.559 15:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.819 15:21:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4074844 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 4074844 ']' 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 4074844 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4074844 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4074844' 00:24:29.115 killing process with pid 4074844 00:24:29.115 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 4074844 00:24:29.115 15:21:38 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 4074844 00:24:29.379 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:29.379 15:21:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.379 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.379 rmmod nvme_tcp 00:24:29.379 rmmod nvme_fabrics 00:24:29.379 rmmod nvme_keyring 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 4070904 ']' 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 4070904 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' 
-z 4070904 ']' 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 4070904 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4070904 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4070904' 00:24:29.639 killing process with pid 4070904 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 4070904 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 4070904 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.639 15:21:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.179 00:24:32.179 real 0m40.709s 00:24:32.179 user 2m5.410s 00:24:32.179 sys 0m8.561s 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.179 ************************************ 00:24:32.179 END TEST nvmf_failover 00:24:32.179 ************************************ 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.179 ************************************ 00:24:32.179 START TEST nvmf_host_discovery 00:24:32.179 ************************************ 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.179 * Looking for test storage... 
00:24:32.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.179 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:32.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.180 --rc genhtml_branch_coverage=1 00:24:32.180 --rc genhtml_function_coverage=1 00:24:32.180 --rc 
genhtml_legend=1 00:24:32.180 --rc geninfo_all_blocks=1 00:24:32.180 --rc geninfo_unexecuted_blocks=1 00:24:32.180 00:24:32.180 ' 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:32.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.180 --rc genhtml_branch_coverage=1 00:24:32.180 --rc genhtml_function_coverage=1 00:24:32.180 --rc genhtml_legend=1 00:24:32.180 --rc geninfo_all_blocks=1 00:24:32.180 --rc geninfo_unexecuted_blocks=1 00:24:32.180 00:24:32.180 ' 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:32.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.180 --rc genhtml_branch_coverage=1 00:24:32.180 --rc genhtml_function_coverage=1 00:24:32.180 --rc genhtml_legend=1 00:24:32.180 --rc geninfo_all_blocks=1 00:24:32.180 --rc geninfo_unexecuted_blocks=1 00:24:32.180 00:24:32.180 ' 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:32.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.180 --rc genhtml_branch_coverage=1 00:24:32.180 --rc genhtml_function_coverage=1 00:24:32.180 --rc genhtml_legend=1 00:24:32.180 --rc geninfo_all_blocks=1 00:24:32.180 --rc geninfo_unexecuted_blocks=1 00:24:32.180 00:24:32.180 ' 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.180 15:21:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.180 15:21:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:32.180 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.181 15:21:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 
00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ phy != virt ]] 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # gather_supported_nvmf_pci_devs 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.181 15:21:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:40.324 
15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # pci_devs+=("${e810[@]}") 00:24:40.324 15:21:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # [[ tcp == rdma ]] 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ e810 == mlx5 ]] 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == e810 ]] 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # pci_devs=("${e810[@]}") 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@359 -- # (( 2 == 0 )) 00:24:40.324 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:40.325 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@364 -- # for pci in "${pci_devs[@]}" 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@365 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:40.325 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # [[ ice == unknown ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # [[ ice == unbound ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@374 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@375 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # (( 0 > 0 )) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ e810 == e810 ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@396 -- # [[ tcp == rdma ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:40.325 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@406 -- # for pci in "${pci_devs[@]}" 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@407 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@412 -- # [[ tcp == tcp ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@413 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ up == up ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # (( 1 == 0 )) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@424 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:40.325 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # (( 2 == 0 )) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # is_hw=yes 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ yes == yes ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # [[ tcp == tcp ]] 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # nvmf_tcp_init 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:40.325 15:21:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@786 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:40.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:24:40.325 00:24:40.325 --- 10.0.0.2 ping statistics --- 00:24:40.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.325 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:40.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:24:40.325 00:24:40.325 --- 10.0.0.1 ping statistics --- 00:24:40.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.325 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # return 0 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:40.325 15:21:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=4081213 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 4081213 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 4081213 ']' 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.325 15:21:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 [2024-10-01 15:21:49.224458] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 
00:24:40.325 [2024-10-01 15:21:49.224509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.325 [2024-10-01 15:21:49.310020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.325 [2024-10-01 15:21:49.384216] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.325 [2024-10-01 15:21:49.384267] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.325 [2024-10-01 15:21:49.384276] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.325 [2024-10-01 15:21:49.384283] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.325 [2024-10-01 15:21:49.384289] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:40.325 [2024-10-01 15:21:49.384314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.325 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.325 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:40.325 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:40.325 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.325 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.325 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.326 [2024-10-01 15:21:50.077240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.326 [2024-10-01 15:21:50.089547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:40.326 15:21:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.326 null0 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.326 null1 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4081510 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4081510 /tmp/host.sock 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 4081510 ']' 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:40.326 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.326 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.587 [2024-10-01 15:21:50.185871] Starting SPDK v25.01-pre git sha1 fefe29c8c / DPDK 24.03.0 initialization... 00:24:40.587 [2024-10-01 15:21:50.185940] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4081510 ] 00:24:40.587 [2024-10-01 15:21:50.250792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.587 [2024-10-01 15:21:50.325304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:41.159 
15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.159 15:21:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.159 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:41.420 15:21:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:41.420 
15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:41.420 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:41.421 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.421 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.421 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.421 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:41.421 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.421 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.421 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.682 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:41.682 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:41.682 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.682 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.682 [2024-10-01 15:21:51.308532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.682 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.682 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:41.682 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:41.683 15:21:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:42.256 [2024-10-01 15:21:52.005792] bdev_nvme.c:7152:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:42.256 [2024-10-01 15:21:52.005814] bdev_nvme.c:7232:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:42.256 [2024-10-01 15:21:52.005827] bdev_nvme.c:7115:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:42.256 [2024-10-01 15:21:52.094107] bdev_nvme.c:7081:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:42.518 [2024-10-01 15:21:52.277805] bdev_nvme.c:6971:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:42.518 [2024-10-01 15:21:52.277828] bdev_nvme.c:6930:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:42.779 15:21:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:42.779 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:42.780 15:21:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.780 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:43.041 15:21:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.041 [2024-10-01 15:21:52.832725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:43.041 [2024-10-01 15:21:52.833249] bdev_nvme.c:7134:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:43.041 [2024-10-01 15:21:52.833274] bdev_nvme.c:7115:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.041 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.042 15:21:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.042 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:43.303 
[2024-10-01 15:21:52.919527] bdev_nvme.c:7076:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:43.303 [2024-10-01 15:21:52.919548] bdev_nvme.c:7094:discovery_log_page_cb: *ERROR*: Discovery[10.0.0.2:8009] spdk_bdev_nvme_create failed (Invalid argument) 00:24:43.303 [2024-10-01 15:21:52.919563] bdev_nvme.c:6930:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:43.303 15:21:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:44.245 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:44.245 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:44.245 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:44.245 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:44.246 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:44.246 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.246 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:44.246 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.246 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:44.246 15:21:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.246 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:44.246 15:21:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.189 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:45.450 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.450 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:45.450 15:21:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 
'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:46.391 15:21:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:47.331 15:21:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:48.713 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:48.713 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:48.713 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:48.713 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:48.714 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:48.714 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.714 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:48.714 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.714 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:48.714 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.714 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:48.714 15:21:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:49.653 15:21:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:50.593 15:22:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- 
# sort -n 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:51.531 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.790 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:51.790 15:22:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:52.729 15:22:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 1 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # trap - ERR 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # print_backtrace 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1155 -- # args=('--transport=tcp') 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1155 -- # local args 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1157 -- # xtrace_disable 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.667 ========== Backtrace start: ========== 00:24:53.667 00:24:53.667 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh:122 -> main(["--transport=tcp"]) 00:24:53.667 ... 00:24:53.667 117 # we should see a second path on the nvme0 subsystem now. 00:24:53.667 118 $rpc_py nvmf_subsystem_add_listener ${NQN}0 -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_SECOND_PORT 00:24:53.667 119 # Wait a bit to make sure the discovery service has a chance to detect the changes 00:24:53.667 120 waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.667 121 waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:53.667 => 122 waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:53.667 123 is_notification_count_eq 0 00:24:53.667 124 00:24:53.667 125 # Remove the listener for the first port. 
The subsystem and bdevs should stay, but we should see 00:24:53.667 126 # the path to that first port disappear. 00:24:53.667 127 $rpc_py nvmf_subsystem_remove_listener ${NQN}0 -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT 00:24:53.667 ... 00:24:53.667 00:24:53.667 ========== Backtrace end ========== 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1194 -- # return 0 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@1 -- # process_shm --id 0 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@808 -- # type=--id 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@809 -- # id=0 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:53.667 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:53.667 nvmf_trace.0 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@823 -- # return 0 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@1 -- # kill 4081510 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@1 -- # nvmftestfini 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@512 -- # nvmfcleanup 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:53.927 rmmod nvme_tcp 00:24:53.927 rmmod nvme_fabrics 00:24:53.927 rmmod nvme_keyring 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 4081213 ']' 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 4081213 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 4081213 ']' 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 4081213 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4081213 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4081213' 00:24:53.927 killing process with pid 4081213 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 4081213 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 4081213 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:24:53.927 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:54.187 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.187 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.187 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.187 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.187 15:22:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.097 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:56.097 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@1 -- # exit 1 00:24:56.097 15:22:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # trap - ERR 00:24:56.097 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # print_backtrace 00:24:56.097 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:24:56.097 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh' 'nvmf_host_discovery' '--transport=tcp') 00:24:56.097 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1155 -- # local args 00:24:56.097 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1157 -- # xtrace_disable 00:24:56.097 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.097 ========== Backtrace start: ========== 00:24:56.097 00:24:56.097 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host_discovery"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh"],["--transport=tcp"]) 00:24:56.097 ... 00:24:56.097 1120 timing_enter $test_name 00:24:56.097 1121 echo "************************************" 00:24:56.097 1122 echo "START TEST $test_name" 00:24:56.097 1123 echo "************************************" 00:24:56.097 1124 xtrace_restore 00:24:56.097 1125 time "$@" 00:24:56.097 1126 xtrace_disable 00:24:56.097 1127 echo "************************************" 00:24:56.097 1128 echo "END TEST $test_name" 00:24:56.097 1129 echo "************************************" 00:24:56.097 1130 timing_exit $test_name 00:24:56.097 ... 00:24:56.097 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh:26 -> main(["--transport=tcp"]) 00:24:56.097 ... 
00:24:56.097 21 00:24:56.097 22 run_test "nvmf_identify" $rootdir/test/nvmf/host/identify.sh "${TEST_ARGS[@]}" 00:24:56.097 23 run_test "nvmf_perf" $rootdir/test/nvmf/host/perf.sh "${TEST_ARGS[@]}" 00:24:56.097 24 run_test "nvmf_fio_host" $rootdir/test/nvmf/host/fio.sh "${TEST_ARGS[@]}" 00:24:56.097 25 run_test "nvmf_failover" $rootdir/test/nvmf/host/failover.sh "${TEST_ARGS[@]}" 00:24:56.098 => 26 run_test "nvmf_host_discovery" $rootdir/test/nvmf/host/discovery.sh "${TEST_ARGS[@]}" 00:24:56.098 27 run_test "nvmf_host_multipath_status" $rootdir/test/nvmf/host/multipath_status.sh "${TEST_ARGS[@]}" 00:24:56.098 28 run_test "nvmf_discovery_remove_ifc" $rootdir/test/nvmf/host/discovery_remove_ifc.sh "${TEST_ARGS[@]}" 00:24:56.098 29 run_test "nvmf_identify_kernel_target" "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" "${TEST_ARGS[@]}" 00:24:56.098 30 run_test "nvmf_auth_host" "$rootdir/test/nvmf/host/auth.sh" "${TEST_ARGS[@]}" 00:24:56.098 31 00:24:56.098 ... 00:24:56.098 00:24:56.098 ========== Backtrace end ========== 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1194 -- # return 0 00:24:56.098 00:24:56.098 real 0m24.262s 00:24:56.098 user 0m31.335s 00:24:56.098 sys 0m7.188s 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1 -- # exit 1 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh' 'nvmf_host' '--transport=tcp') 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # local args 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1157 -- # xtrace_disable 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.098 ========== Backtrace start: ========== 00:24:56.098 00:24:56.098 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh"],["--transport=tcp"]) 00:24:56.098 ... 00:24:56.098 1120 timing_enter $test_name 00:24:56.098 1121 echo "************************************" 00:24:56.098 1122 echo "START TEST $test_name" 00:24:56.098 1123 echo "************************************" 00:24:56.098 1124 xtrace_restore 00:24:56.098 1125 time "$@" 00:24:56.098 1126 xtrace_disable 00:24:56.098 1127 echo "************************************" 00:24:56.098 1128 echo "END TEST $test_name" 00:24:56.098 1129 echo "************************************" 00:24:56.098 1130 timing_exit $test_name 00:24:56.098 ... 00:24:56.098 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh:16 -> main(["--transport=tcp"]) 00:24:56.098 ... 00:24:56.098 11 exit 0 00:24:56.098 12 fi 00:24:56.098 13 00:24:56.098 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:56.098 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:56.098 => 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:56.098 17 00:24:56.098 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 
00:24:56.098 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:24:56.098 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:24:56.098 21 run_test "nvmf_interrupt" $rootdir/test/nvmf/target/interrupt.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:24:56.098 ... 00:24:56.098 00:24:56.098 ========== Backtrace end ========== 00:24:56.098 15:22:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1194 -- # return 0 00:24:56.098 00:24:56.098 real 2m35.797s 00:24:56.098 user 5m23.470s 00:24:56.098 sys 0m56.592s 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1125 -- # trap - ERR 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1125 -- # print_backtrace 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf') 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.098 ========== Backtrace start: ========== 00:24:56.098 00:24:56.098 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_tcp"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:24:56.098 ... 
00:24:56.098 1120 timing_enter $test_name 00:24:56.098 1121 echo "************************************" 00:24:56.098 1122 echo "START TEST $test_name" 00:24:56.098 1123 echo "************************************" 00:24:56.098 1124 xtrace_restore 00:24:56.098 1125 time "$@" 00:24:56.098 1126 xtrace_disable 00:24:56.098 1127 echo "************************************" 00:24:56.098 1128 echo "END TEST $test_name" 00:24:56.098 1129 echo "************************************" 00:24:56.098 1130 timing_exit $test_name 00:24:56.098 ... 00:24:56.098 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh:280 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"]) 00:24:56.098 ... 00:24:56.098 275 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:24:56.098 276 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:24:56.098 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:56.098 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:56.098 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:24:56.098 => 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:56.098 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:24:56.098 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:56.098 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:24:56.098 284 fi 00:24:56.098 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:24:56.098 ... 
00:24:56.098 00:24:56.098 ========== Backtrace end ========== 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0 00:24:56.098 00:24:56.098 real 20m23.431s 00:24:56.098 user 43m56.298s 00:24:56.098 sys 6m27.549s 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@1393 -- # xtrace_disable 00:24:56.098 15:22:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.207 INFO: APP EXITING 00:25:14.207 INFO: killing all VMs 00:25:14.207 INFO: killing vhost app 00:25:14.207 WARN: no vhost pid file found 00:25:14.207 INFO: EXIT DONE 00:25:17.507 Waiting for block devices as requested 00:25:17.507 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:17.507 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:17.507 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:17.507 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:17.768 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:17.768 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:17.768 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:18.030 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:18.030 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:25:18.292 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:18.292 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:18.292 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:18.292 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:18.554 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:18.554 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:18.554 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:18.554 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:22.768 Cleaning 00:25:22.768 Removing: /var/run/dpdk/spdk0/config 00:25:22.768 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:22.768 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:25:22.768 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:25:22.768 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:25:22.768 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:25:22.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:25:22.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:25:22.769 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:25:22.769 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:25:22.769 Removing: /var/run/dpdk/spdk0/hugepage_info
00:25:22.769 Removing: /var/run/dpdk/spdk1/config
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:25:22.769 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:25:22.769 Removing: /var/run/dpdk/spdk1/hugepage_info
00:25:22.769 Removing: /var/run/dpdk/spdk1/mp_socket
00:25:22.769 Removing: /var/run/dpdk/spdk2/config
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:25:22.769 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:25:22.769 Removing: /var/run/dpdk/spdk2/hugepage_info
00:25:22.769 Removing: /var/run/dpdk/spdk3/config
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:25:22.769 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:25:22.769 Removing: /var/run/dpdk/spdk3/hugepage_info
00:25:22.769 Removing: /var/run/dpdk/spdk4/config
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:25:22.769 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:25:22.769 Removing: /var/run/dpdk/spdk4/hugepage_info
00:25:22.769 Removing: /dev/shm/bdev_svc_trace.1
00:25:22.769 Removing: /dev/shm/nvmf_trace.0
00:25:22.769 Removing: /dev/shm/spdk_tgt_trace.pid3722069
00:25:22.769 Removing: /var/run/dpdk/spdk0
00:25:22.769 Removing: /var/run/dpdk/spdk1
00:25:22.769 Removing: /var/run/dpdk/spdk2
00:25:22.769 Removing: /var/run/dpdk/spdk3
00:25:22.769 Removing: /var/run/dpdk/spdk4
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3720449
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3722069
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3722699
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3723908
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3724073
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3725374
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3725383
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3725846
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3726977
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3727623
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3727982
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3728334
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3728707
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3729062
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3729412
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3729768
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3730147
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3731226
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3734826
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3735194
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3735558
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3735814
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3736265
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3736379
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3736976
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3737025
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3737377
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3737704
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3737807
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3738079
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3738532
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3738883
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3739281
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3743853
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3749188
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3761236
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3761928
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3767677
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3768151
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3773295
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3780375
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3783479
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3796003
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3807007
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3809065
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3810084
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3831665
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3836424
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3891920
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3898312
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3905472
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3912706
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3912709
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3913712
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3914716
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3915724
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3916394
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3916405
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3916736
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3916810
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3916931
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3917993
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3918996
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3920049
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3920655
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3920774
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3921033
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3922494
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3923896
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3934369
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3970308
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3975730
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3977715
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3980050
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3980367
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3980549
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3980758
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3981477
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3983515
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3984601
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3985288
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3987909
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3988706
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3989428
00:25:22.769 Removing: /var/run/dpdk/spdk_pid3994483
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4001183
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4001184
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4001185
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4005865
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4016690
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4021690
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4028717
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4030217
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4031853
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4033585
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4039344
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4044324
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4053364
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4053476
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4058525
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4058671
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4058875
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4059447
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4059541
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4064923
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4065752
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4071503
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4074844
00:25:22.769 Removing: /var/run/dpdk/spdk_pid4081510
00:25:22.769 Clean
00:28:44.494 15:25:47 nvmf_tcp -- common/autotest_common.sh@1451 -- # return 1
00:28:44.494 15:25:47 nvmf_tcp -- common/autotest_common.sh@1 -- # :
00:28:44.494 15:25:47 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1
00:28:44.505 [Pipeline] }
00:28:44.517 [Pipeline] // stage
00:28:44.522 [Pipeline] }
00:28:44.534 [Pipeline] // timeout
00:28:44.539 [Pipeline] }
00:28:44.542 ERROR: script returned exit code 1
00:28:44.542 Setting overall build result to FAILURE
00:28:44.553 [Pipeline] // catchError
00:28:44.556 [Pipeline] }
00:28:44.566 [Pipeline] // wrap
00:28:44.571 [Pipeline] }
00:28:44.579 [Pipeline] // catchError
00:28:44.586 [Pipeline] stage
00:28:44.587 [Pipeline] { (Epilogue)
00:28:44.596 [Pipeline] catchError
00:28:44.597 [Pipeline] {
00:28:44.605 [Pipeline] echo
00:28:44.606 Cleanup processes
00:28:44.611 [Pipeline] sh
00:28:44.894 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:44.894 4133859 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:44.908 [Pipeline] sh
00:28:45.194 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:45.194 ++ grep -v 'sudo pgrep'
00:28:45.194 ++ awk '{print $1}'
00:28:45.194 + sudo kill -9
00:28:45.194 + true
00:28:45.207 [Pipeline] sh
00:28:45.510 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:53.656 [Pipeline] sh
00:28:53.943 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:53.943 Artifacts sizes are good
00:28:53.958 [Pipeline] archiveArtifacts
00:28:53.964 Archiving artifacts
00:28:54.272 [Pipeline] sh
00:28:54.634 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:54.674 [Pipeline] cleanWs
00:28:54.706 [WS-CLEANUP] Deleting project workspace...
00:28:54.706 [WS-CLEANUP] Deferred wipeout is used...
00:28:54.714 [WS-CLEANUP] done
00:28:54.716 [Pipeline] }
00:28:54.733 [Pipeline] // catchError
00:28:54.742 [Pipeline] echo
00:28:54.744 Tests finished with errors. Please check the logs for more info.
00:28:54.747 [Pipeline] echo
00:28:54.748 Execution node will be rebooted.
00:28:54.762 [Pipeline] build
00:28:54.765 Scheduling project: reset-job
00:28:54.779 [Pipeline] sh
00:28:55.066 + logger -p user.info -t JENKINS-CI
00:28:55.076 [Pipeline] }
00:28:55.090 [Pipeline] // stage
00:28:55.094 [Pipeline] }
00:28:55.107 [Pipeline] // node
00:28:55.112 [Pipeline] End of Pipeline
00:28:55.156 Finished: FAILURE